
[Haskell-cafe] about Haskell code written to be "too smart"


Manlio Perillo

Mar 24, 2009, 1:42:03 PM
to Haskell Cafe mailing list
Hi.

These days I'm discussing with some friends who mainly use Python
as their programming language, but who also know other languages well,
such as Scheme, Prolog, and C.

These friends are very interested in Haskell, but it seems that the main
reason they don't start learning it seriously is that when they
start reading some code, they feel the "Perl syndrome".

That is, code written to be "too smart", which ends up being totally
illegible to a Haskell novice.

I too have this feeling, from time to time.


Since someone has started writing a Haskell coding style guide, I really
suggest they take this "problem" into serious consideration.


Manlio
_______________________________________________
Haskell-Cafe mailing list
Haskel...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

Tim Newsham

Mar 24, 2009, 2:10:47 PM
to Manlio Perillo, Haskell Cafe mailing list
> These friends are very interested in Haskell, but it seems that the main
> reason they don't start learning it seriously is that when they start
> reading some code, they feel the "Perl syndrome".
>
> That is, code written to be "too smart", which ends up being totally
> illegible to a Haskell novice.
>
> I too have this feeling, from time to time.
>
> Since someone has started writing a Haskell coding style guide, I really
> suggest they take this "problem" into serious consideration.

When you think about it, what you are saying is that Haskell programmers
shouldn't take advantage of the extra tools that Haskell provides. Haskell
provides the ability to abstract code beyond what many other programming
systems allow. This abstraction gives you the ability to express things
much more tersely. This makes the code a lot harder to read for people
who are not familiar with the abstractions being used. This can be
overcome with practice and experience.

I'm not trying to say that code can never get too complex. Humans have
some complexity budget and it's not too hard to push the limits and blow
your complexity budget. But that is true in any language. The ability to
abstract lets you factor out common patterns that are easy to reuse and
remember (with practice) and lets you spend your complexity budget
elsewhere. As a programmer you still need to use your judgement to
balance complexity against understandability.

[Obviously if you are writing code that you want to be readable by
people who aren't well versed in common Haskell idioms, you'd limit
your use of abstractions.]

> Manlio

Tim Newsham
http://www.thenewsh.com/~newsham/

Sjur Gjøstein Karevoll

Mar 24, 2009, 2:18:36 PM
to haskel...@haskell.org
I know what you're saying, in a way. There is much Haskell code that's
completely illegible to me. I would say there is a difference between
Haskell and Perl though, in that Perl code is "too smart" aka. "clever",
while Haskell code is usually simply, well, too smart. This means code
written using aspects of covariant generalized applicative combinators
in a closed Hilbert-space-like continuous field ring, and other similar
nonsense.

There was a time when "monadic parser combinator" sounded equally
nonsensical to me. It doesn't anymore, and I'm a better programmer for
it, being able to reduce one of my earliest Haskell programs from 200 to
20 lines using that knowledge alone while making it more comprehensible
and maintainable at the same time.

The difference between Haskell and Perl is that Haskell programmers use
clever ideas while Perl programmers use clever abuse of syntax. Ideas,
at least, you have a hope of understanding sometime in the future.

Jake McArthur

Mar 24, 2009, 2:29:55 PM
to Manlio Perillo, Haskell Cafe mailing list

Manlio Perillo wrote:
| These friends are very interested in Haskell, but it seems that the main
| reason they don't start learning it seriously is that when they
| start reading some code, they feel the "Perl syndrome".
|
| That is, code written to be "too smart", which ends up being totally
| illegible to a Haskell novice.
|
| I too have this feeling, from time to time.

I used to think this as well, but have since changed my mind about most
cases. It is simply the case that Haskell code is extremely dense. The
more powerful your abstractions, the more functionality you can cram
into one line of code. This can give the appearance of being overly
clever, since we are accustomed to clever code being unnervingly short
and using lots of short variable names and operators. It is generally
encouraged to use single-letter variable names in Haskell because there
are many cases where you haven't a clue what the type of that variable
might be, again due to Haskell's amazing ability to abstract such things
away. All these factors combined just means that you have to concentrate
just as hard to understand one line of Haskell as you might 10 or 20
lines of other languages. There is 10 or 20 times the amount of information.

- Jake

Miguel Mitrofanov

Mar 24, 2009, 2:37:30 PM
to Manlio Perillo, Haskell Cafe mailing list
Well, I'd say that there is something close to the Perl syndrome, in
some sense. After all, code IS usually very smart. The difference is
that in Perl all smartness is about knowing how the computer works, or
how the interpreter works. In Haskell, instead, the smartness is about
knowing - or inventing - the general setting in which the problem
looks less complex.

Manlio Perillo

Mar 24, 2009, 2:43:04 PM
to Tim Newsham, Haskell Cafe mailing list
Tim Newsham wrote:

>> These friends are very interested in Haskell, but it seems that the
>> main reason they don't start learning it seriously is that when
>> they start reading some code, they feel the "Perl syndrome".
>>
>> That is, code written to be "too smart", which ends up being totally
>> illegible to a Haskell novice.
>>
>> I too have this feeling, from time to time.
>>
>> Since someone has started writing a Haskell coding style guide, I
>> really suggest they take this "problem" into serious consideration.
>
> When you think about it, what you are saying is that Haskell programmers
> shouldn't take advantage of the extra tools that Haskell provides.

No, I'm not saying this.

But, as an example, when you read a function like:

buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns

that can be rewritten (argument reversed) as:

takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
where (head, tail) = splitAt n xs

I think that there is a problem.

The buildPartitions definition contains too many "blocks".
And I have read code with even more "blocks" in one line.
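For reference, the two formulations agree whenever the requested lengths fit within the input (they can diverge when the counts overrun the list); a runnable sketch of both, with the shadowing head/tail names renamed to chunk/rest:

```haskell
-- A runnable sketch of both definitions from this message, for comparison.
-- The shadowing names head/tail are renamed to chunk/rest here.
buildPartitions :: [a] -> [Int] -> [[a]]
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns

takeList :: [Int] -> [a] -> [[a]]
takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = chunk : takeList ns rest
  where (chunk, rest) = splitAt n xs

main :: IO ()
main = do
  print (buildPartitions [1 .. 10 :: Int] [3, 2, 4])  -- [[1,2,3],[4,5],[6,7,8,9]]
  print (takeList [3, 2, 4] [1 .. 10 :: Int])         -- the same partitions
```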

It may not be a problem for a "seasoned" Haskell programmer, but when
you write some code, you should never forget that your code will be read
by programmers who may not be at your level.

I think that many Haskell programmers forget this detail, and IMHO this
is wrong.

> Haskell provides the ability to abstract code beyond what many other
> programming systems allow. This abstraction gives you the ability to
> express things much more tersely. This makes the code a lot harder to
> read for people who are not familiar with the abstractions being used.

The problem is that I still have trouble reading and understanding
code that is too terse...
Because I have to assemble each block in my mind, and if there are too
many blocks I have trouble.

> [...]


Manlio

Manlio Perillo

Mar 24, 2009, 2:51:49 PM
to Jake McArthur, Haskell Cafe mailing list
Jake McArthur wrote:

>
> Manlio Perillo wrote:
> | These friends are very interested in Haskell, but it seems that the main
> | reason they don't start learning it seriously is that when they
> | start reading some code, they feel the "Perl syndrome".
> |
> | That is, code written to be "too smart", which ends up being totally
> | illegible to a Haskell novice.
> |
> | I too have this feeling, from time to time.
>
> I used to think this as well, but have since changed my mind about most
> cases.

The same for me.

> [...]


> All these factors combined just means that you have to concentrate
> just as hard to understand one line of Haskell as you might 10 or 20
> lines of other languages. There is 10 or 20 times the amount of
> information.
>

This is right.
The problem is that often (IMHO) a function definition can be rewritten
so that it is much more readable.

As an example, with the takeList function I posted.

In other cases, you can just break long lines, introducing intermediate
functions that have a descriptive name *and* a type definition.

Doing this is an art, but a coding style for Haskell should try to
document this.

> [...]


Manlio

Jonathan Cast

Mar 24, 2009, 2:55:20 PM
to Manlio Perillo, Haskell Cafe mailing list
On Tue, 2009-03-24 at 19:42 +0100, Manlio Perillo wrote:
> Tim Newsham wrote:
> >> These friends are very interested in Haskell, but it seems that the
> >> main reason they don't start learning it seriously is that when
> >> they start reading some code, they feel the "Perl syndrome".
> >>
> >> That is, code written to be "too smart", which ends up being totally
> >> illegible to a Haskell novice.
> >>
> >> I too have this feeling, from time to time.
> >>
> >> Since someone has started writing a Haskell coding style guide, I
> >> really suggest they take this "problem" into serious consideration.
> >
> > When you think about it, what you are saying is that Haskell programmers
> > shouldn't take advantage of the extra tools that Haskell provides.
>
> No, I'm not saying this.
>
> But, as an example, when you read a function like:
>
> buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
>
> that can be rewritten (argument reversed) as:
>
> takeList :: [Int] -> [a] -> [[a]]
> takeList [] _ = []
> takeList _ [] = []
> takeList (n : ns) xs = head : takeList ns tail
> where (head, tail) = splitAt n xs

Huh? This is ugly and un-readable. Seriously.

> I think that there is a problem.

Damn straight. It should be:

> buildPartitions xs ns =
> zipWith take ns $ init $ scanl (flip drop) xs ns

Or, if you're really worried about blocks/line, you can increase the
line count a bit (I do this regularly):

> buildPartitions xs ns =
>   zipWith take ns $        -- Select just the indicated prefix of each element
>   init $                   -- Skip the last (empty) element
>   scanl (flip drop) xs $   -- Cumulatively remove prefixes of indicated length
>   ns

> The buildPartition contains too many "blocks".
> And I have read code with even more "blocks" in one line.
>
> It may not be a problem for a "seasoned" Haskell programmer, but when
> you write some code, you should never forget that your code will be read
> by programmers that can not be at your same level.

Not if I can help it.

More seriously, beginner code belongs in the first two-three chapters of
Haskell programming textbooks, not anywhere else. It's like putting Fun
with Dick & Jane-speak in an adult novel.[1]

> I think that many Haskell programmers forget this detail, and IMHO this
> is wrong.
>
> > Haskell provides the ability to abstract code beyond what many other
> > programming systems allow. This abstraction gives you the ability to
> > express things much more tersely. This makes the code a lot harder to
> > read for people who are not familiar with the abstractions being used.
>
> The problem is that I still have trouble reading and understanding
> code that is too terse...
> Because I have to assemble each block in my mind, and if there are too
> many blocks I have trouble.

jcc

[1] Well, not that bad. Beginner-level code is useful for teaching the
basics of the language; Fun with Dick & Jane is child abuse.

Jake McArthur

Mar 24, 2009, 3:07:22 PM
to Manlio Perillo, Haskell Cafe mailing list

Manlio Perillo wrote:
| This is right.
| The problem is that often (IMHO) a function definition can be rewritten
| so that it is much more readable.
|
| As an example, with the takeList function I posted.

I looked at it, found nothing wrong with the original, and absolutely
hated your "fixed" version. I might have written it like this, instead:

~ buildPartitions xs ns = zipWith take ns . init . scanl (flip drop) xs $ ns

I think this way separates the different "stages" of the function
somewhat better, but it's barely a change. The original was fine.

| In other cases, you can just break long lines, introducing intermediate
| functions that have a descriptive name *and* a type definition.
|
| Doing this is an art, but a coding style for Haskell should try to
| document this.

Agreed.



Tim Newsham

Mar 24, 2009, 3:29:19 PM
to Manlio Perillo, Haskell Cafe mailing list
>> When you think about it, what you are saying is that Haskell programmers
>> shouldn't take advantage of the extra tools that Haskell provides.
>
> No, I'm not saying this.
>
> But, as an example, when you read a function like:
>
> buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
>
> that can be rewritten (argument reversed) as:
>
> takeList :: [Int] -> [a] -> [[a]]
> takeList [] _ = []
> takeList _ [] = []
> takeList (n : ns) xs = head : takeList ns tail
> where (head, tail) = splitAt n xs

I think this is a perfect example. Haskell allows you to abstract out the
concepts of recursion, zipping and iteration. Your alternative reproduces
these explicitly and intermixes them. You are saying that programmers
should avoid using these higher level abstractions and instead fall back
to more explicit constructs that are, for you, easier to read.

> The problem is that I still have trouble reading and understanding code
> that is too terse...
> Because I have to assemble each block in my mind, and if there are too many
> blocks I have trouble.

It takes practice to read and to write. The benefit is more
expressiveness and more code reuse.

> Manlio

Tim Newsham
http://www.thenewsh.com/~newsham/

Eugene Kirpichov

Mar 24, 2009, 3:29:56 PM
to Manlio Perillo, Haskell Cafe mailing list
2009/3/24 Manlio Perillo <manlio_...@libero.it>:

> Tim Newsham wrote:
>>>
>>> These friends are very interested in Haskell, but it seems that the main
>>> reason they don't start learning it seriously is that when they
>>> start reading some code, they feel the "Perl syndrome".
>>>
>>> That is, code written to be "too smart", which ends up being totally
>>> illegible to a Haskell novice.
>>>
>>> I too have this feeling, from time to time.
>>>
>>> Since someone has started writing a Haskell coding style guide, I really
>>> suggest they take this "problem" into serious consideration.
>>
>> When you think about it, what you are saying is that Haskell programmers
>> shouldn't take advantage of the extra tools that Haskell provides.
>
> No, I'm not saying this.
>
> But, as an example, when you read a function like:
>
> buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
>

Wow, very cool! And very readable; I actually got the idea of what the
function was going to do after reading the scanl (flip drop), and the
rest of the function only convinced me that I was right.

The second version is far worse, because it forces me to think about
what to do if the lists are empty, how to decompose them if they
aren't - all this stuff is 'imperative' and irrelevant to the problem,
and is elegantly omitted in the one-liner.

--
Eugene Kirpichov
Web IR developer, market.yandex.ru

Miguel Mitrofanov

Mar 24, 2009, 3:30:28 PM
to Jake McArthur, Haskell Cafe mailing list
> | As an example, with the takeList function I posted.
>
> I looked at it, found nothing wrong with the original, and absolutely
> hated your "fixed" version. I might have written it like this,
> instead:
>
> ~ buildPartitions xs ns = zipWith take ns . init . scanl (flip drop) xs $ ns

Maybe it's just me, but I think that

takeList ns xs = evalState (mapM (State . splitAt) ns) xs

or even

takeList = evalState . mapM (State . splitAt)

would be much clearer than both versions.
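For readers meeting this idiom for the first time: splitAt n :: [a] -> ([a], [a]) already has exactly the shape of a state transition, returning a result (the taken prefix) paired with a new state (the remaining suffix). A runnable sketch with a minimal hand-rolled State, mirroring what Control.Monad.State provided at the time (the bare State constructor no longer exists in modern mtl, where you would use the state function instead):

```haskell
-- A minimal State monad, just enough to run the one-liner above.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  return = pure
  State g >>= f = State $ \s ->
    let (a, s') = g s in runState (f a) s'

evalState :: State s a -> s -> a
evalState m s = fst (runState m s)

-- splitAt n is already a state transition on the leftover list:
-- the prefix is the result, the suffix is the new state.
takeList :: [Int] -> [a] -> [[a]]
takeList = evalState . mapM (State . splitAt)

main :: IO ()
main = print (takeList [3, 1, 2] "abcdefgh")  -- ["abc","d","ef"]
```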

Eugene Kirpichov

Mar 24, 2009, 3:33:43 PM
to Miguel Mitrofanov, Haskell Cafe mailing list
Pretty cool once you know what the function does, but I must admit I
wouldn't immediately guess the purpose of the function when written in
this way.

2009/3/24 Miguel Mitrofanov <migue...@yandex.ru>:

--

Eugene Kirpichov
Web IR developer, market.yandex.ru

Jake McArthur

Mar 24, 2009, 3:43:37 PM
to Miguel Mitrofanov, Haskell Cafe mailing list

Miguel Mitrofanov wrote:
| Maybe it's just me, but I think that
|
| takeList ns xs = evalState (mapM (State . splitAt) ns) xs
|
| or even
|
| takeList = evalState . map (State . splitAt)
|
| would be much clearer than both versions.

Definitely. I stuck with only the functions that were already being used
because I figured the point was to make things readable with a limited
set of building blocks. Thanks for sharing though. That was definitely
not a solution that I was thinking of.

- Jake



Manlio Perillo

Mar 24, 2009, 4:05:18 PM
to Jake McArthur, Haskell Cafe mailing list
Jake McArthur wrote:

>
> Manlio Perillo wrote:
> | This is right.
> | The problem is that often (IMHO) a function definition can be rewritten
> | so that it is much more readable.
> |
> | As an example, with the takeList function I posted.
>
> I looked at it, found nothing wrong with the original, and absolutely
> hated your "fixed" version.

With the original version, you have to "follow" 3 separate operations:

Prelude> let xs = [1, 2, 3, 4] :: [Int]
Prelude> let ns = [3, 1] :: [Int]
Prelude> let _1 = scanl (flip drop) xs ns
Prelude> let _2 = init _1
Prelude> let _3 = zipWith take ns _2


With my function, instead, you only have to "follow" 1 operation:

Prelude> (head, tail) = splitAt n xs
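Filling in the values, the three stages of the first version evaluate like this (a runnable sketch of the trace above):

```haskell
-- Evaluating the three stages of buildPartitions on the example lists.
xs, ns :: [Int]
xs = [1, 2, 3, 4]
ns = [3, 1]

main :: IO ()
main = do
  let _1 = scanl (flip drop) xs ns  -- [[1,2,3,4],[4],[]]
      _2 = init _1                  -- [[1,2,3,4],[4]]
      _3 = zipWith take ns _2       -- [[1,2,3],[4]]
  mapM_ print [_1, _2, _3]
```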

> [...]


Manlio

Yitzchak Gale

Mar 24, 2009, 4:18:08 PM
to Miguel Mitrofanov, Haskell Cafe mailing list
Manlio Perillo complained about:

>> buildPartitions xs ns = zipWith take ns . init . scanl (flip drop) xs $ ns

Miguel Mitrofanov wrote:
> takeList = evalState . mapM (State . splitAt)

Ha! Bravo!

As the author of the offending zipWith/scanl version,
I can say that love those State monad one-liners.
However, ironically, I stopped using them for pretty
much the same reason that Manlio is saying.

The difference is that zipWith and scanl are classic Haskell
idioms that any Haskell programmer will learn fairly early
on. Whereas State monad one-liners used to be thought of
as new and fancy and esoteric. But now they are becoming
more mainstream, so perhaps I should go back to them.

So the bottom line is that Manlio is right, really. It's just
that Haskell is still very different than what most
programmers are used to. So it does take a while to
get a feeling for what is "too smart".

Yitz

Jake McArthur

Mar 24, 2009, 4:32:41 PM
to Manlio Perillo, Haskell Cafe mailing list

Manlio Perillo wrote:
| With the original version, you have to "follow" 3 separate operations:
|
| Prelude> let xs = [1, 2, 3, 4] :: [Int]
| Prelude> let ns = [3, 1] :: [Int]
| Prelude> let _1 = scanl (flip drop) xs ns
| Prelude> let _2 = init _1
| Prelude> let _3 = zipWith take ns _2
|
|
| With my function, instead, you only have to "follow" 1 operation:
|
| Prelude> (head, tail) = splitAt n xs

I think you are way oversimplifying your own code.

~ takeList :: [Int] -> [a] -> [[a]]
~ takeList [] _ = []
~ takeList _ [] = []
~ takeList (n : ns) xs = head : takeList ns tail
~ where (head, tail) = splitAt n xs

In order to understand this, I have to look at three different cases, an
uncons, a splitAt, a cons, *and* a recursive call. This is *seven*
different things I have to absorb.

- Jake



Peter Verswyvelen

Mar 24, 2009, 4:38:46 PM
to Jonathan Cast, Haskell Cafe mailing list
On Tue, Mar 24, 2009 at 7:48 PM, Jonathan Cast <jonath...@fastmail.fm> wrote:

> On Tue, 2009-03-24 at 19:42 +0100, Manlio Perillo wrote:
> > But, as an example, when you read a function like:
> >
> > buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
> >
> > that can be rewritten (argument reversed) as:
> >
> > takeList :: [Int] -> [a] -> [[a]]
> > takeList [] _ = []
> > takeList _ [] = []
> > takeList (n : ns) xs = head : takeList ns tail
> > where (head, tail) = splitAt n xs
>
> Huh? This is ugly and un-readable. Seriously.
>

I think this is subjective. Personally I can understand the second
definition immediately, but the first one requires some puzzling. But that
might be because I'm relatively new to Haskell. Of course the usage of head
and tail in the example is unfortunate, one should not use these shadowing
names.

But aren't these two definitions different algorithms? At first sight I think
the second one is more efficient than the first one.

Manlio Perillo

Mar 24, 2009, 4:45:39 PM
to ga...@sefer.org, Haskell Cafe mailing list, Miguel Mitrofanov
Yitzchak Gale wrote:
> [...]

> So the bottom line is that Manlio is right, really. It's just
> that Haskell is still very different than what most
> programmers are used to. So it does take a while to
> get a feeling for what is "too smart".
>

Right, you've pinpointed the problem!

The problem is where to draw the line between "normal" and
"too smart".

Your function is readable, once I mentally separate each step.
For someone with more experience, this operation may be automatic, and
the function may appear totally natural.

When writing these "dense" functions, it is important, IMHO, to help the
reader with comments, or by introducing intermediate functions.


Manlio

Manlio Perillo

Mar 24, 2009, 4:52:27 PM
to Jake McArthur, Haskell Cafe mailing list
Jake McArthur wrote:
> [...]

> | With my function, instead, you only have to "follow" 1 operation:
> |
> | Prelude> (head, tail) = splitAt n xs
>
> I think you are way oversimplifying your own code.
>
> ~ takeList :: [Int] -> [a] -> [[a]]
> ~ takeList [] _ = []
> ~ takeList _ [] = []
> ~ takeList (n : ns) xs = head : takeList ns tail
> ~ where (head, tail) = splitAt n xs
>
> In order to understand this, I have to look at three different cases, an
> uncons, a splitAt, a cons, *and* a recursive call. This is *seven*
> different things I have to absorb.

These cases are, IMHO, more "natural".

We have a set of equations, pattern matching and recursion.
These are among the basic building blocks of Haskell.

The only "foreign" building block is the splitAt function.

But this may be really a question of personal taste or experience.
What is more "natural"?

1) pattern matching
2) recursion
or
1) function composition
2) higher-order functions

?

> [...]


Manlio

Conal Elliott

Mar 24, 2009, 4:53:24 PM
to Manlio Perillo, Miguel Mitrofanov, Haskell Cafe mailing list
Another helpful strategy for the reader is to get smarter, i.e. to invest
effort in rising to the level of the writer. Or just choose a different
book if s/he prefers. - Conal

Conal Elliott

Mar 24, 2009, 4:56:09 PM
to Manlio Perillo, Haskell Cafe mailing list
"Recursion is the goto of functional programming". Also, "Do not confuse
what is natural with what is habitual." - Conal

Manlio Perillo

Mar 24, 2009, 5:12:27 PM
to Conal Elliott, Miguel Mitrofanov, Haskell Cafe mailing list
Conal Elliott wrote:

> Another helpful strategy for the reader is to get smarter, i.e. to
> invest effort in rising to the level of the writer. Or just choose a
> different book if s/he prefers. - Conal
>

This strategy is doomed to failure, unfortunately.
We live in the real world, compromises are necessary.

> [...]

Peter Verswyvelen

Mar 24, 2009, 5:17:34 PM
to Conal Elliott, Haskell Cafe mailing list, Miguel Mitrofanov
Sometimes that is very hard when the writer is way smarter than the reader
:-)
2009/3/24 Conal Elliott <co...@conal.net>

Lutz Donnerhacke

Mar 24, 2009, 5:18:33 PM
to haskel...@haskell.org
* Manlio Perillo wrote:
> But this may be really a question of personal taste or experience.
> What is more "natural"?
>
> 1) pattern matching
> 2) recursion
> or
> 1) function composition
> 2) higher-order functions

Composition of library functions is usually much more readable than
hand-written recursion, simply because the typical idiom is highlighted,
instead of you having to check for yourself that there is no strange
matching hidden among the obvious cases.

Composition of library functions is usually also more efficient, and
preferable to hand-written recursion, simply because of its fine-tuned
fusion capabilities.

Conal Elliott

Mar 24, 2009, 5:18:56 PM
to Peter Verswyvelen, Haskell Cafe mailing list, Miguel Mitrofanov
Hah! It sure is. :)

Conal Elliott

Mar 24, 2009, 5:19:53 PM
to Manlio Perillo, Miguel Mitrofanov, Haskell Cafe mailing list
"The reasonable man adapts himself to the world; the unreasonable one
persists in trying to adapt the world to himself. Therefore all progress
depends on the unreasonable man." - George Bernard Shaw

Peter Verswyvelen

Mar 24, 2009, 5:23:08 PM
to Manlio Perillo, Haskell Cafe mailing list, Miguel Mitrofanov
On Tue, Mar 24, 2009 at 10:11 PM, Manlio Perillo
<manlio_...@libero.it> wrote:

> This strategy is doomed to failure, unfortunately.


So it is the good strategy, because Haskell's slogan is "avoid success at
all cost" :-)


> We live in the real world, compromises are necessary.


I don't think so. It's just that we have different kinds of people with
different skills. If you try to please the whole world, you please nobody.

As a beginner Haskeller, I just know I need more practice. folding is now
natural to me, but monad transformers and applicative stuff not yet, but
that's a matter of time. I just need to practice practice practice.

Gregg Reynolds

Mar 24, 2009, 5:23:42 PM
to Manlio Perillo, Haskell Cafe mailing list
On Tue, Mar 24, 2009 at 1:42 PM, Manlio Perillo
<manlio_...@libero.it> wrote:
>
> But, as an example, when you read a function like:
>
> buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
>
> that can be rewritten (argument reversed) as:
>
> takeList :: [Int] -> [a] -> [[a]]
> takeList [] _         =  []
> takeList _ []         =  []
> takeList (n : ns) xs  =  head : takeList ns tail
>    where (head, tail) = splitAt n xs
>
> I think that there is a problem.

This crops up all the time even in simple mathematics. One way to
provide assistance to newcomers is to provide a quasi-English reading
of the notation. Take as an example a simple set comprehension
expression (using Z email notation,
http://csci.csusb.edu/dick/samples/z.lexis.html):

{ x : Int | 0 < x < 10 /\ x %e Odd @ 2*x }

That's pretty opaque for beginners until they learn to read | as "such
that", %e as "member of" and @ as "generate", so that they can express
the idea in quasi-English: "form a set by taking all integers x such
that ... and ..., then generate the result by doubling them" or the
like. Or take | as "filter" and @ as "map"; the point is it helps to
be able to express it in something like natural language.
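The same comprehension transcribes almost directly into Haskell, where the quasi-English reading carries over (a sketch; the name doubledOdds is mine):

```haskell
-- The Z comprehension { x : Int | 0 < x < 10 /\ x %e Odd @ 2*x }
-- as a Haskell list comprehension: take x from 1..9 such that x is odd,
-- and generate 2*x.
doubledOdds :: [Int]
doubledOdds = [2 * x | x <- [1 .. 9], odd x]

main :: IO ()
main = print doubledOdds  -- [2,6,10,14,18]
```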

Do something similar for your buildPartitions definition and I'll bet
you'll end up with something much more user friendly than takeList.

-gregg

Bas van Dijk

Mar 24, 2009, 5:24:38 PM
to Peter Verswyvelen, Haskell Cafe mailing list
2009/3/24 Peter Verswyvelen <bug...@gmail.com>:

> But aren't these two definitions different algorithms? At first sight I think
> the second one is more efficient than the first one.

Some performance numbers:

----------------------------------------------------------------------

module Main where

import System.Environment (getArgs)
import Control.Monad.State (State(..), evalState)

takeList1, takeList2, takeList3 :: [Int] -> [a] -> [[a]]

takeList1 [] _ = []
takeList1 _ [] = []
takeList1 (n : ns) xs = head : takeList1 ns tail
  where (head, tail) = splitAt n xs

takeList2 ns xs = zipWith take ns . init . scanl (flip drop) xs $ ns

takeList3 = evalState . mapM (State . splitAt)

test :: Int -> [[Int]]
test n = takeList1 (take n [1..]) [1..]

main :: IO ()
main = print . sum . map sum . test . read . head =<< getArgs

----------------------------------------------------------------------

compile with: ghc --make TakeList.hs -o takeList1 -O2

$ time ./takeList1 5000
739490938

real 0m6.229s
user 0m5.787s
sys 0m0.342s

$ time ./takeList2 5000
739490938

real 0m5.089s
user 0m4.455s
sys 0m0.348s

$ time ./takeList3 5000
739490938

real 0m6.224s
user 0m5.750s
sys 0m0.347s

----------------------------------------------------------------------

regards

Bas

Dan Piponi

Mar 24, 2009, 5:26:55 PM
to ga...@sefer.org, Haskell Cafe mailing list, Miguel Mitrofanov
> Miguel Mitrofanov wrote:
>> takeList = evalState . mapM (State . splitAt)

> However, ironically, I stopped using them for pretty
> much the same reason that Manlio is saying.

Are you saying there's a problem with this implementation? It's the
only one I could just read immediately. The trick is to see that
evalState and State are just noise for the type inferencer so we just
need to think about mapM splitAt. This turns a sequence of integers
into a sequence of splitAts, each one chewing on the leftovers of the
previous one. *Way* easier than both the zipWith one-liner and the
explicit version. It says exactly what it means, almost in English.
--
Dan

Zachary Turner

Mar 24, 2009, 5:30:16 PM
to Manlio Perillo, Haskell Cafe mailing list, Miguel Mitrofanov
On Tue, Mar 24, 2009 at 4:11 PM, Manlio Perillo <manlio_...@libero.it> wrote:

> Conal Elliott wrote:
>
>> Another helpful strategy for the reader is to get smarter, i.e. to invest
>> effort in rising to the level of the writer. Or just choose a different
>> book if s/he prefers. - Conal
>>
>>
> This strategy is doomed to failure, unfortunately.
> We live in the real world, compromises are necessary.
>

It depends, IMO. Making changes to the programming style one uses, in
particular ones such as you propose, would ultimately lead to programs in
Haskell being less flexible and/or powerful than they would otherwise be.
I'm a bit new to Haskell myself, but I do understand that one of the
primary use cases and/or motivating factors for using Haskell is when you
really just NEED that extra abstraction and power you get from being able
to do these types of things. Someone once said that "simple problems
should be simple and difficult problems should be possible". That doesn't
mean the difficult problems become EASY. One of the best uses for Haskell
is solving difficult problems. It's obviously still going to be difficult,
and as such the writer (and hence by extension the reader) is going to
have to be smart as well.

C++ is actually beginning to suffer the complexity problem as well,
especially with C++0x, but I fundamentally disagree with the added
complexity in C++, specifically because it is a language which is supposed
to excel at solving all kinds of problems. Haskell excels at solving
difficult problems, so I don't think the target audience for Haskell
necessarily needs to include people who can't figure out difficult code.
C++ otoh they need to agree on a target audience or set of problems that
it's geared toward, and then either s**t or get off the pot. It's fine if
they keep adding complexity until the cows come home, but just agree up
front that that's what it is and programmers who aren't cut out for it use a
different language. With Haskell I think you have that up-front agreement,
so there's no problem.

Jonathan Cast

unread,
Mar 24, 2009, 5:31:08 PM3/24/09
to Eugene Kirpichov, Haskell Cafe mailing list, Miguel Mitrofanov
On Tue, 2009-03-24 at 22:33 +0300, Eugene Kirpichov wrote:
> Pretty cool once you know what the function does, but I must admit I
> wouldn't immediately guess the purpose of the function when written in
> this way.

I wouldn't immediately guess the purpose of the function written in any
way.

I think, in general, the best way to document the purpose of the
function is

-- | Split a list into a sequence of partitions of specified
-- lengths.
takeList :: [Int] -> [a] -> [[a]]

jcc

Yitzchak Gale

unread,
Mar 24, 2009, 5:35:24 PM3/24/09
to Dan Piponi, Haskell Cafe mailing list, Miguel Mitrofanov
Miguel Mitrofanov wrote:
>>> takeList = evalState . mapM (State . splitAt)

I wrote:
>> However, ironically, I stopped using them for pretty
>> much the same reason that Manlio is saying.

Dan Piponi wrote:
> Are you saying there's a problem with this implementation? It's the

> only one I could just read immediately...


> It says exactly what it means, almost in English.

Yes, I agree. But at a time when the majority
of experienced Haskellers couldn't easily see that because
they weren't comfortable enough with the State monad,
using it would have cost more on average (for debugging,
refactoring, etc.). Whereas now I don't think that's a
problem anymore.

Yitz

Gregg Reynolds

unread,
Mar 24, 2009, 5:38:26 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list
On Tue, Mar 24, 2009 at 12:41 PM, Manlio Perillo
<manlio_...@libero.it> wrote:

> I too have this feeling, from time to time.
>

So do I, because I haven't had the time to learn what I need to learn
in order to read the code smoothly. I find that when I do work out
the meaning, most often the style reflects conciseness or
expressiveness, not obfuscatory tricks that the language allows.


>
> Since someone is starting to write the Haskell coding style, I really
> suggest him to take this "problem" into strong consideration.

Rule One of the Haskell Coding Style Handbook: learn Haskell first,
then worry about style. After all, nobody in her right mind would
tackle a French style manual without learning French first. Although
I suppose one could argue that learning Haskell in fact involves
learning various styles. ;)

-gregg

Manlio Perillo

unread,
Mar 24, 2009, 5:43:32 PM3/24/09
to Jonathan Cast, Miguel Mitrofanov, Haskell Cafe mailing list
Jonathan Cast ha scritto:
> [...]

>
> I think, in general, the best way to document the purpose of the
> function is
>
> -- | Split a list into a sequence of partitions of specified
> -- lengths.
> takeList :: [Int] -> [a] -> [[a]]
>

Note that I was not speaking about the best way to document a function.

I was speaking about the best way to write a function, so that it may
help someone who is learning Haskell.

> [...]

Manlio

Ross Mellgren

unread,
Mar 24, 2009, 5:50:53 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list, Miguel Mitrofanov
As (yet another?) Haskell newbie, with a day job using Java (where
"keep it simple, stupid" is not a principle, it's a language enforced
requirement), I would much prefer the function is implemented in the
most concise and idiomatic style that the writer is capable of. That
is, either the zipWith...scanl solution (or its variants) or the state
solution.

I've found that I learn considerably more from functions written this
way that also have a good documentation comment than from munching on
the standard pattern-matching recursion again and again. If the
function is well described, and short in purpose and text, I can rely on
the fact that functional programming (with some exceptions) ensures
that all I need to understand the behavior is right in front of me, and
I can spend my time learning the patterns.

Just my 2 cents,

-Ross

Conal Elliott

unread,
Mar 24, 2009, 5:51:25 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list, Miguel Mitrofanov
I'd love to help newbies get the hang of Haskell without having to jump in
the deep (and smart-infested) end first. And I'd love for people to keep
writing smart code for non-newbies to enjoy.

Perhaps a practical suggestion would be some wiki pages devoted to pointing
out code with various learning qualities, to help haskellers of all levels
of experience learn effectively.

- Conal

Jonathan Cast

unread,
Mar 24, 2009, 5:51:42 PM3/24/09
to Manlio Perillo, Miguel Mitrofanov, Haskell Cafe mailing list
On Tue, 2009-03-24 at 22:43 +0100, Manlio Perillo wrote:
> Jonathan Cast ha scritto:
> > [...]
> >
> > I think, in general, the best way to document the purpose of the
> > function is
> >
> > -- | Split a list into a sequence of partitions of specified
> > -- lengths.
> > takeList :: [Int] -> [a] -> [[a]]
> >
>
> Note that I was not speaking about the best way to document a function.
>
> I was speaking about the best way to write a function, so that it may
> help someone who is learning Haskell.

I've already explicitly rejected the claim that professional Haskell
code should be written to aid beginning users. Again, that's what
textbooks are for.

And I was explicitly commenting on the claim that it was obvious, from
any version posted thus far, what the function was supposed to do. Your
suggested code hardly helps make the function's purpose clear; comments
(or, better yet, tests, such as:

    prop_length = \ns xn -> sum ns <= length xn ==>
        map length (takeList ns xn) == ns

do a much better job of explaining purpose).

jcc

Loup Vaillant

unread,
Mar 24, 2009, 5:53:13 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list, Miguel Mitrofanov
2009/3/24 Manlio Perillo <manlio_...@libero.it>:

> Jonathan Cast ha scritto:
>>
>> [...]
>>
>> I think, in general, the best way to document the purpose of the
>> function is
>>
>>    -- | Split a list into a sequence of partitions of specified
>>    -- lengths.
>>    takeList :: [Int] -> [a] -> [[a]]

*That* was what I craved for. With the type and a name like
"partitions", I would hardly have to look at the code at all. The
comment is almost superfluous.

> Note that I was not speaking about the best way to document a function.
>
> I was speaking about the best way to write a function, so that it may help
> someone who is learning Haskell.

Then, the first version plus the documentation above would be perfect.
Instant understanding about the purpose of the function, and insight
about a how to write it.

Loup

Manlio Perillo

unread,
Mar 24, 2009, 5:57:52 PM3/24/09
to Zachary Turner, Haskell Cafe mailing list, Miguel Mitrofanov
Zachary Turner ha scritto:
> [...]

> but I do understand that one of the primary use
> cases and/or motivating factors for using Haskell is when you really
> just NEED that extra abstraction and power you get from being able to do
> these types of things. Someone once said that "simple problems should
> be simple and difficult problems should be possible". That doesn't mean
> the difficult problems become EASY. One of the best uses for haskell is
> solving difficult problems. It's obviously still going to be difficult
> to solve, and as such the writer (and hence by extension the reader) is
> going to have to be smart as well.
>

I agree with you, and in fact I'm still learning Haskell.
The reason I'm still learning Haskell is because I like its syntax.
And yes, I also like the ability to write efficient functions by
composing other functions.

But there is a limit.
In C you have the ability to write assembler code, but one usually thinks
twice before doing so, since it will become unreadable to most people.

If you think that writing low-level assembler code is the best solution,
you should at least document it well, instead of assuming that the
reader is as smart as you.


As I have written at the beginning of the thread, there are people I know
(*much* smarter than me) that keep themselves away from Haskell
because they start to read some code, and they feel something is wrong.

They *think* "ah, the author wrote code in this way just to show how
smart he is; how can I learn a language if most of the available code is
written in this way"?

Note the use of the verb "think".
This is only a sensation, and it is wrong; but sensations are important.

Manlio Perillo

unread,
Mar 24, 2009, 6:00:01 PM3/24/09
to haskel...@haskell.org
Erik de Castro Lopo ha scritto:

> Manlio Perillo wrote:
>
>> I was speaking about the best way to write a function, so that it may
>> help someone who is learning Haskell.
>
> I've been learning Haskell for about 3 months.
>
> I think it's a mistake to write code so that it's easy for someone
> learning Haskell to read it. Code should be written to be easily
> read by other experienced users of the language.
>

Note that writing code so that it's easy to read does not necessarily
mean rewriting the code as I did in the example.

It also means adding good comments, in the right places.

> Erik

Manlio Perillo

unread,
Mar 24, 2009, 6:04:24 PM3/24/09
to Conal Elliott, Haskell Cafe mailing list, Miguel Mitrofanov
Conal Elliott ha scritto:

> I'd love to help newbies get the hang of Haskell without having to jump
> in the deep (and smart-infested) end first. And I'd love for people to
> keep writing smart code for non-newbies to enjoy.
>
> Perhaps a practical suggestion would be some wiki pages devoted to
> pointing out code with various learning qualities, to help haskellers of
> all levels of experience learn effectively.
>

Yes, this is a good start.

Advices to people learning Haskell about how to learn reading code.
And advices to experienced Haskell programmers about how to document
their code so that it may help less experienced programmers.

IMHO, this should also go in the future Haskell coding style.

Peter Verswyvelen

unread,
Mar 24, 2009, 6:04:55 PM3/24/09
to Miguel Mitrofanov, Haskell Cafe mailing list
On Tue, Mar 24, 2009 at 8:29 PM, Miguel Mitrofanov <migue...@yandex.ru> wrote:

> takeList ns xs = evalState (mapM (State . splitAt) ns) xs
>>
>
> or even
>
> takeList = evalState . map (State . splitAt)
>
> would be much clearer than both versions.


Brilliant. As a newbie, I knew all these functions, I have used them all.
When I saw both initial implementations, I tried to write what you did, but
failed, I didn't see the pattern, failed to pick the correct functions in my
head, failed to make the puzzle.

I guess that is the real power of Haskell. In imperative languages, the more
you practice, the better you get at avoiding the imperative pitfalls. In
functional languages, more practice really results in more and more
productivity because you recognize the patterns; the design patterns are not
just thoughts but real functions you can reuse.

John Melesky

unread,
Mar 24, 2009, 6:06:38 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list
On Mar 24, 2009, at 1:51 PM, Manlio Perillo wrote:
> But this may be really a question of personal taste or experience.
> What is more "natural"?
>
> 1) pattern matching
> 2) recursion
> or
> 1) function composition
> 2) high level functions

I think, actually, that one of the fundamental intuitions of (modern)
Haskell programming is that recursion should *rarely* be explicit,
because the majority of places you'd use recursion all fall into a few
different patterns (hence, the proliferation of maps and folds).

Once you get those recursive operations firmly embedded in your mind,
combining them becomes much simpler, and you can reason about more
complex transformations much more easily.
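As a sketch of that point: the explicit-recursion version of takeList from earlier in the thread is itself an instance of a standard pattern, an unfold, so even the "hand-written" recursion can be given a name.

```haskell
import Data.List (unfoldr)

-- The explicit recursion from the thread, rewritten as an unfold:
-- each step peels one chunk off the seed (ns, xs) and stops when
-- either list is exhausted, matching the original pattern-matching
-- definition.
takeList :: [Int] -> [a] -> [[a]]
takeList ns xs = unfoldr step (ns, xs)
  where
    step ([],     _ ) = Nothing
    step (_,      []) = Nothing
    step (n:rest, ys) = let (chunk, ys') = splitAt n ys
                        in Just (chunk, (rest, ys'))

-- ghci> takeList [1,2,3] "abcdef"
-- ["a","bc","def"]
```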

-johnnnnnn

Manlio Perillo

unread,
Mar 24, 2009, 6:15:41 PM3/24/09
to Dan Piponi, Miguel Mitrofanov, Haskell Cafe mailing list
Dan Piponi ha scritto:

>> Miguel Mitrofanov wrote:
>>> takeList = evalState . mapM (State . splitAt)
>
>> However, ironically, I stopped using them for pretty
>> much the same reason that Manlio is saying.
>
> Are you saying there's a problem with this implementation? It's the
> only one I could just read immediately.

Yes, you understand it immediately once you know what a state monad is.
But how well is the state monad introduced, explained, and emphasized in
current textbooks?

When I started learning Haskell, the first thing I learned was recursion
and pattern matching.

So, this may be the reason why I find my takeList solution more readable.


> [...]


Manlio

Jake McArthur

unread,
Mar 24, 2009, 6:18:57 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list

Manlio Perillo wrote:
| But this may be really a question of personal taste or experience.
| What is more "natural"?
|
| 1) pattern matching
| 2) recursion
| or
| 1) function composition
| 2) high level functions

Definitely the latter two. They are easier to comprehend (assuming each
of the smaller abstractions is already internalized) and more
efficient. Arguably, this building-block approach is the whole *point*
of Haskell.

- - Jake

Jonathan Cast

unread,
Mar 24, 2009, 6:24:12 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list, Miguel Mitrofanov
On Tue, 2009-03-24 at 23:15 +0100, Manlio Perillo wrote:
> Dan Piponi ha scritto:
> >> Miguel Mitrofanov wrote:
> >>> takeList = evalState . mapM (State . splitAt)
> >
> >> However, ironically, I stopped using them for pretty
> >> much the same reason that Manlio is saying.
> >
> > Are you saying there's a problem with this implementation? It's the
> > only one I could just read immediately.
>
> Yes, you understand it immediately once you know what a state monad is.
> But how well is the state monad introduced, explained, and emphasized in
> current textbooks?
>
> When I started learning Haskell, the first thing I learned was recursion
> and pattern matching.

You know, this might actually need to be looked into.

You need to know recursion and pattern-matching to *write* re-usable
higher-order functions, but how appropriate is that as the first thing
taught?

jcc

Conal Elliott

unread,
Mar 24, 2009, 6:32:38 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list
>
> And advices to experienced Haskell programmers about how to document their
> code so that it may help less experienced programmers.
>

Manlio -- You may be missing the point of my suggestion, which is to help
people *find* code that suits them, rather than changing anyone's coding
style. Optimizing code for one segment of readers is pessimizing it for
another. Instead of dumbing down the smart code, I'd like to help your
friends to help each other find dumber code, *and* to help others of us find
smarter code.

- Conal

Jake McArthur

unread,
Mar 24, 2009, 6:35:45 PM3/24/09
to Jonathan Cast, Miguel Mitrofanov, Haskell Cafe mailing list

Jonathan Cast wrote:
| You know, this might actually need to be looked into.
|
| You need to know recursion and pattern-matching to *write* re-usable
| higher-order functions, but how appropriate is that as the first thing
| taught?

An excellent question!

Coincidentally, I was just having a conversation with my girlfriend
about programming with "building blocks." She described her main hurdle
with programming at the moment: she is used to working with tangible
objects that you just put together in the appropriate way, and her mind
expects programming to work the same way, but it doesn't, at least in
the languages she has looked at so far.
I hypothesized that a language emphasizing combinators might be more
intuitive to her than a language emphasizing loops and imperative steps,
for precisely this reason. I'm not entirely sure that she bought it, but
she seemed to agree that it at least sounds nice in theory.

Now I just have to convince her to become a willing subject in this
experiment. ;)

This question makes me wonder... why is explicit recursion taught first?
I can't help but think now that it may be because those coming from
imperative languages are used to writing loops, and recursion is the
closest to loops that we have.

- - Jake

Conal Elliott

unread,
Mar 24, 2009, 6:41:03 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list, Miguel Mitrofanov
Manlio,

We live in the age of participation -- of co-education. Don't worry about
text-books. Contribute to some wiki pages & blogs today that share these
smart techniques with others.

<twocents>Learning/progress mainly results when people respond to their
own incomprehension by moving into new & challenging ideas, not by banishing
them. Puzzlement can be met by resistance or by embracing &
learning.</twocents>

Conal Elliott

unread,
Mar 24, 2009, 6:44:29 PM3/24/09
to Jake McArthur, Haskell Cafe mailing list, Miguel Mitrofanov
>
> This question makes me wonder... why is explicit recursion taught first?
> [...]
>

Perhaps also because teachers, being older than their students, are often
mired in outdated thinking.

Benja Fallenstein

unread,
Mar 24, 2009, 6:59:28 PM3/24/09
to Peter Verswyvelen, Miguel Mitrofanov, Haskell Cafe mailing list
2009/3/24 Peter Verswyvelen <bug...@gmail.com>:

>> This strategy is doomed to failure, unfortunately.
>
> So it is the good strategy, because Haskell's slogan is "avoid success at
> all cost" :-)


IN THE YEAR 1987, WAR WAS BEGINNING

BIG, IMPERATIVE SOFTWARE BEHEMOTHS CLASHED IN A STATE OF IMPURITY

UNDER THE SHADOW OF FEAR AND DOUBT, COLONY BY COLONY FELL INTO TYPELESS ANARCHY

WHOLE PLANETS WERE SCROUNGED BY TERRIBLE SEGFAULTS

THE HUNGER FOR A NEW PARADIGM WAS GNAWING AT THE ROOTS OF THE CIVILIZED UNIVERSE


MEANWHILE, IN A GALAXY FAR, FAR AWAY, A SMALL GROUP OF LAZY FUNCTIONAL
PROGRAMMERS CREATED A LANGUAGE

IT WAS OUR LAST, BEST HOPE TO AVOID SUCCESS AT ALL COST...

IT FAILED


IT EVOLVED


THERE ARE 8,581 IMPLEMENTATIONS SUPPORTING 935,842,712 EXTENSIONS

THEY LOOK AND FEEL ... FUNCTIONAL

SOME ARE PROGRAMMED TO THINK THAT THEY AREN'T IMPERATIVE AT ALL

AT LEAST ONE IS ACTUALLY USED


ONCE, IT HAD BEEN OUR LAST, BEST HOPE TO AVOID SUCCESS

IN THE YEAR 2009, IT BECAME SOMETHING GREATER:

OUR LAST, BEST HOPE FOR BLASTING THE INFERIOR LANGUAGES OUT OF THE SKY
(WITH LAZY CLASS)


YOU HAVE NO CHANCE TO SURVIVE MAP YOUR BIND

Manlio Perillo

unread,
Mar 24, 2009, 7:07:55 PM3/24/09
to Conal Elliott, Haskell Cafe mailing list, Miguel Mitrofanov
Conal Elliott ha scritto:

> Manlio,
>
> We live in the age of participation -- of co-education. Don't worry
> about text-books. Contribute to some wiki pages & blogs today that
> share these smart techniques with others.
>

When I started learning Haskell (by my initiative), what I did was:

1) Quick reading of the first tutorial I found on the wiki.
http://darcs.haskell.org/yaht/yaht.pdf, if I remember correctly

2) Quick reading the Haskell Report

3) Reading another tutorial:
http://www.haskell.org/tutorial/

4) Reading again the Haskell Report

5) A lot of time spent finding good tutorials.
Yet, I did not know what monads were; I just
felt that monads were some strange and advanced feature

.. A period where I stopped looking at Haskell

6) Found some good tutorials about what monads are, but I still did not
know anything about state monads, monad transformers, and so on.

.. Another period where I stopped looking at Haskell

7) The Real Word Haskell book.
Finally in one book all "advanced" concepts.

I read the book online.
I found the book good, but I think it is too dispersive in some
chapters.
I already forgot some of the concepts I read, mostly because in some
chapters I got annoyed, and started skipping things, or reading them
quickly.

I will buy a copy in May, at PyCon Italy
(where there will be a stand by O'Reilly), so that I can read it
again.

8) New impetus at learning Haskell.
I read again the Haskell Report, and the
"A Gentle Introduction to Haskell".

I finally started to understand how things works

9) Start to write some "real" code.

Now I'm able to understand much of the code I read.
But for some kinds of code I still have problems.

Manlio Perillo

unread,
Mar 24, 2009, 7:26:14 PM3/24/09
to Conal Elliott, Haskell Cafe mailing list
Conal Elliott ha scritto:

> And advices to experienced Haskell programmers about how to document
> their code so that it may help less experienced programmers.
>
>
> Manlio -- You may be missing the point of my suggestion,

Ah, sorry.

> which is to
> help people *find* code that suits them, rather than changing anyone's
> coding style. Optimizing code for one segment of readers is pessimizing
> it for another. Instead of dumbing down the smart code, I'd like to
> help your friends to help each other find dumber code, *and* to help
> others of us find smarter code.
>


This may be hard to do.

However, I have already suggested starting by reading the Prelude code,
from the Haskell Report.

Donn Cave

unread,
Mar 24, 2009, 7:43:51 PM3/24/09
to Haskell Cafe mailing list
> Manlio -- You may be missing the point of my suggestion, which is to help

> people *find* code that suits them, rather than changing anyone's coding
> style. Optimizing code for one segment of readers is pessimizing it for
> another. Instead of dumbing down the smart code, I'd like to help your
> friends to help each other find dumber code, *and* to help others of us find
> smarter code.

If he really intended to promote some dumb code as a better
alternative to some otherwise equivalent smart code, then I must
have missed his point.

For me, when people defend a practice with notions like "programmer
needs be smarter/more responsible/better educated", that's like the
institutional equivalent of a "code smell". You see it everywhere,
too. C/C++ programmers will tell you its storage model is fine, just
"programmer needs to be more ..."

C's storage model does have its advantages, and smart code is
presumably a good thing too. But for example, exercises like just
stripping a function of extraneous parameter identifiers doesn't make
it smart, while it may make it harder for someone to understand it
at a glance. I do it myself, even though I claim to detest it,
which may tell us something about the appeal of exercises like that.

Go ahead and write smart; clearly the benefits outweigh the cost.
But to tell us that there's no cost, no problem here, if a reader who
knows Haskell has a hard time following? That's the "institution smell".

Donn

Clive Brettingham-Moore

unread,
Mar 24, 2009, 7:48:50 PM3/24/09
to Haskell Cafe mailing list
Code like that is why I love Haskell, while I haven't written a Haskell
program in years it is still a joy to read (much more so than the pretty
good zipWith version).
In reference to later comments: if you don't know Monads, you don't know
Haskell; that goes double for higher-order functions.
So really the only place where this code may be inappropriate is in a
beginner tutorial (unless you are trying to show why they need to learn
more!).
C

Miguel Mitrofanov wrote:

> takeList ns xs = evalState (mapM (State . splitAt) ns) xs
>
> or even
>
> takeList = evalState . map (State . splitAt)
>

_______________________________________________

Jonathan Cast

unread,
Mar 24, 2009, 7:50:15 PM3/24/09
to Donn Cave, Haskell Cafe mailing list
On Tue, 2009-03-24 at 16:43 -0700, Donn Cave wrote:
> If he really intended to promote some dumb code as a better
> alternative to some otherwise equivalent smart code,

`Smart' is Manlio's term --- or, rather, his characterization of his
friends' reaction upon seeing some inscrutable piece of (apparent)
Haskell golf or (seemingly) pointless code. The code seems excessively
clever to them, whereas Manlio's example is merely clear, well-written,
concise, and declarative, rather than operational, in intention.

> ...

> Go ahead and write smart, clearly the benefits outweigh the cost,
> but tell us that there's no cost, no problem here if a reader who
> knows Haskell has a hard time following?

What reader who knows Haskell? We have a programmer who is,
self-confessedly, just learning Haskell, not really proficient; we have
his friends, who, by his statement of the problem, do not know Haskell at
all; and we have some unspecified group of other developers who, by
selection, barely know Haskell or do not know it at all --- that is,
developers who are still in the process of learning. I think your
``reader who knows Haskell'' has nowhere figured in the discussion so
far.

jcc

Alberto G. Corona

unread,
Mar 24, 2009, 9:09:57 PM3/24/09
to haskel...@haskell.org, Manlio Perillo
Perhaps it is much easier to create one-line compositions of functions in
Haskell than in C because the type system helps a lot in the process.
However, reusability of source code and maintainability have never been taken
seriously by Haskell programmers, simply because there are no industrial
projects in Haskell with dozens of people with different skills that come
and go. Because of that, probably the early C programmers were far more
exuberant than the current C++ and Java programmers are now. To have a broad
base of users and/or to assure cheap programmers for your industrial
application carries the servitude of "the rule of least power". That is another
reason for the slogan: "Avoid success at all costs".

The rule of least power
(http://www.w3.org/2001/tag/doc/leastPower.html) was originally
written by Tim Berners-Lee. For publishing (and, arguably, for code
reusability) "the best language is the least powerful".

These depressing conclusions can be overcome if we consider that the rule of
least power favours Turing-incomplete DSLs, so every industrial development
can be decomposed into two groups which demand two different skills: 1)
DSL creation 2) DSL programming


2009/3/24 Manlio Perillo <manlio_...@libero.it>

John Meacham

unread,
Mar 24, 2009, 9:17:28 PM3/24/09
to haskel...@haskell.org
On Tue, Mar 24, 2009 at 10:29:55PM +0300, Miguel Mitrofanov wrote:
> Maybe it's just me, but I think that

>
> takeList ns xs = evalState (mapM (State . splitAt) ns) xs
>
> or even
>
> takeList = evalState . map (State . splitAt)
>
> would be much clearer than both versions.

I love it! It wouldn't occur to me to utilize State like this (too used
to thinking of it as a black box rather than what's inside of it). Quite
a lot of useful information to learn can be expressed in a line of
Haskell. Sort of like a zen koan. :)
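What's inside the box is small enough to write down. Here is a hand-rolled sketch (not the mtl implementation, just enough to show why splitAt fits the shape `s -> (a, s)` exactly):

```haskell
-- A minimal hand-rolled State, to make the inside of the box explicit:
-- a State computation is just a function from a state to
-- (result, new state), and splitAt n :: [a] -> ([a], [a]) already has
-- exactly that shape.
newtype State s a = State { runState :: s -> (a, s) }

evalState :: State s a -> s -> a
evalState m = fst . runState m

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s ->
    let (a, s') = g s
    in runState (k a) s'

takeList :: [Int] -> [a] -> [[a]]
takeList = evalState . mapM (State . splitAt)

-- ghci> takeList [1,2,3,4,5] (repeat 'x')
-- ["x","xx","xxx","xxxx","xxxxx"]
```

Laziness carries through: the state is only ever inspected as far as each splitAt needs, so an infinite input list is fine.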

John

--
John Meacham - ⑆repetae.net⑆john⑈

Richard O'Keefe

unread,
Mar 24, 2009, 9:53:54 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list, Jake McArthur
May I suggest that the most important thing missing from
all these versions of the function is a comment?
Most of the time I shouldn't *care* how the function works.
(And that, for me, is one of the key benefits of Haskell.)

Jake McArthur

unread,
Mar 24, 2009, 10:07:47 PM3/24/09
to Richard O'Keefe, Haskell Cafe mailing list, Jake McArthur
Richard O'Keefe wrote:
> May I suggest that the most important thing missing from
> all these versions of the function is a comment?
> Most of the time I shouldn't *care* how the function works.
> (And that, for me, is one of the key benefits of Haskell.)

Although in this case, a proper name and type signature is probably
enough. :)

- Jake

Gwern Branwen

unread,
Mar 24, 2009, 10:31:59 PM3/24/09
to Manlio Perillo, Haskell Cafe mailing list
On Tue, Mar 24, 2009 at 2:42 PM, Manlio Perillo
<manlio_...@libero.it> wrote:
> Tim Newsham ha scritto:
>>>
>>> These friends are very interested in Haskell, but it seems that the main
>>> reason why they don't start to seriously learning it, is that when they
>>> start reading some code, they feel the "Perl syndrome".
>>>
>>> That is, code written to be "too smart", and that end up being totally
>>> illegible by Haskell novice.
>>>
>>> I too have this feeling, from time to time.
>>>
>>> Since someone is starting to write the Haskell coding style, I really
>>> suggest him to take this "problem" into strong consideration.
>>
>> When you think about it, what you are saying is that Haskell programmers
>> shouldn't take advantage of the extra tools that Haskell provides.
>
> No, I'm not saying this.
>
> But, as an example, when you read a function like:
>
> buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
>
> that can be rewritten (argument reversed) as:

>
> takeList :: [Int] -> [a] -> [[a]]
> takeList [] _         =  []
> takeList _ []         =  []
> takeList (n : ns) xs  =  head : takeList ns tail
>    where (head, tail) = splitAt n xs
..
>> [...]
>
>
> Manlio

Correct me if I'm wrong, but isn't this an example against your
thesis? Your two definitions apparently define different things.

{-# LANGUAGE NoMonomorphismRestriction #-}
import Test.QuickCheck

test = (\x y -> buildPartitions x y == takeList y x)

buildPartitions :: [a] -> [Int] -> [[a]]
buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns

takeList :: [Int] -> [a] -> [[a]]

takeList [] _ = []
takeList _ [] = []
takeList (n : ns) xs = head : takeList ns tail
where (head, tail) = splitAt n xs

{-
*Main Control.Monad Data.Char Data.List> quickCheck test
<interactive>:1:11:
Warning: Defaulting the following constraint(s) to type `()'
`Eq a' arising from a use of `test' at <interactive>:1:11-14
`Arbitrary a'
arising from a use of `quickCheck' at <interactive>:1:0-14
`Show a' arising from a use of `quickCheck' at <interactive>:1:0-14
In the first argument of `quickCheck', namely `test'
In a stmt of a 'do' expression: it <- quickCheck test
*** Failed! Falsifiable (after 2 tests):
[]
[0]
-}

--
gwern

deech

unread,
Mar 24, 2009, 10:39:33 PM3/24/09
to
For those folks following along at home, a small clarification, this
amazing one-liner:

>takeList = evalState . map (State . splitAt)

errors in GHCI with:
Couldn't match expected type `State s a'
against inferred type `[State [a1] [a1]]'
Expected type: [Int] -> State s a
Inferred type: [Int] -> [State [a1] [a1]]
In the second argument of `(.)', namely `map (State . splitAt)'
In the expression: evalState . map (State . splitAt)

The correct code is:


>takeList = evalState . mapM (State . splitAt)

And a test run:
>*Main Data.Map Control.Monad.State> takeList [1,2,3,4,5] $ repeat 'x'
["x","xx","xxx","xxxx","xxxxx"]

Thanks so much for helping me think about State this way.
-deech

> Haskell-C...@haskell.org
> http://www.haskell.org/mailman/listinfo/haskell-cafe


Erik de Castro Lopo

unread,
Mar 24, 2009, 11:12:18 PM3/24/09
to haskel...@haskell.org
Jake McArthur wrote:

> Richard O'Keefe wrote:
> > May I suggest that the most important thing missing from
> > all these versions of the function is a comment?
> > Most of the time I shouldn't *care* how the function works.
> > (And that, for me, is one of the key benefits of Haskell.)
>
> Although in this case, a proper name and type signature is probably
> enough. :)

I trust type signatures much more than comments because I
know the compiler actually verifies the type signature.

Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/

wren ng thornton

unread,
Mar 24, 2009, 11:32:58 PM3/24/09
to Haskell Cafe
Manlio Perillo wrote:
> But this may be really a question of personal taste or experience.
> What is more "natural"?
>
> 1) pattern matching
> 2) recursion
> or
> 1) function composition
> 2) high level functions


Which is more "natural":
* C-style for-loops (aka assembly while-loops), or
* any modern language's foreach loops (aka iterators)?

Following directly from the Rule of Least Power, if you can get away
with foreach then that's what you should use. Why? Because the less
power the construct has, the fewer corner cases and generalizations a
reader of the code needs to consider. Now, just because iterators exist
does not mean that one should never use the more general tool. If you're
fighting to break out of your chosen straitjacket, then chances are it's
the wrong one to use in the first place; it'd be clearer to use more
power and have less fighting.

Both of these conclusions seem quite natural to me, even from before
learning Haskell. It seems, therefore, that "naturality" is not the
proper metric to discuss. It's oft overlooked, but the fact is that
expressivity comes not from more formal power, but from _less_.

* A human's (or any vertebrate's) range of motion is severely crippled
when compared to that of an amoeba; and yet it is those limitations
which provide the structure necessary to perform greater tasks such as
grasping, lifting, jumping, etc.

* Natural language has a limited range of words and syntactic
constructs, but gives the large-enough building blocks to enable
unconstrained communication; whereas a language with a unique word for
every utterance (arguably simpler) is impossible to learn.

* Regular expressions (and other classes of automata) have severe
limitations on formal power, and yet these constraints enable poly-time
algorithms for intersection, union, etc.

* Haskell's type system (sans extensions) is not Turing complete, yet
this enables us to infer types rather than requiring annotations or proofs.


The contemporary state of scientific research is focused heavily on the
idea of reductionism (the idea of being able to reduce all biology to
chemistry, all chemistry to physics, all computer science to
mathematics, etc). But as any systems theorist will tell you, this
approach is misguided if the goal is a Theory of Everything. As per the
famous book: no matter how much you learn about quarks, that tells you
nothing about jaguars.

At every step of reduction, there is an increase in formal power and a
concomitant loss of information. Even perfect knowledge of quarks and
perfect simulation software isn't enough, because you've lost the
_abstraction_ that is "jaguar". You can simulate it, emulate it, model
it, but you've lost the high-level perspective that says jaguars are
different and more interesting than an arbitrary simulation of a
collection of quarks. (And it's doubtful we'll ever have the omniscience
to get even that far.)

While primitive recursion and case matching are _fundamental_ (that is,
at the bottom of a reductionist tower), that does not entail that they
are _central_ (that is, a ubiquitous pattern at every resolution of
reduction). Church encoding, SKI combinators, Curry-Howard isomorphism,
and the like are also fundamental topics to teach and understand; but
they're rarely ones that should be central to a program or library.

Now, many Haskellers (like good scientists) bristle at this fundamental
nature of things. And in response we're constantly coming up with new
generalizations which have little-enough structure to be descriptive
while having big-enough structure to be interesting. If there's too much
structure, it's boilerplate and therefore unusable; if there's too
little, it has no generality and is therefore unhelpful. But somewhere
between those extremes someone has to make a judgment call and decide
whether some particular pattern measures up to the metric of being
helpful and usable. If it does, then everyone (whose domain it covers)
should learn it and use it because it simplifies programming from a
high-level of design.

Giants. Shoulders. Etc.

--
Live well,
~wren

Heinrich Apfelmus

unread,
Mar 25, 2009, 2:53:26 AM3/25/09
to haskel...@haskell.org
Manlio Perillo wrote:
> Conal Elliott ha scritto:
>> Manlio,
>>
>> We live in the age of participation -- of co-education. Don't worry
>> about text-books. Contribute to some wiki pages & blogs today that
>> share these smart techniques with others.
>>
>
> When I started learning Haskell (by my initiative), what I did was:
>
> [steps 1) - 9), mostly internet tutorials ]

I think you'd have had a much easier time by starting with a proper book
right away, like Richard Bird's "Introduction to Functional Programming
in Haskell", accompanied by Real World Haskell. You see, the reason that
books cost money is (should be) high quality content. :)


Regards,
apfelmus

--
http://apfelmus.nfshost.com

Heinrich Apfelmus

unread,
Mar 25, 2009, 3:00:28 AM3/25/09
to haskel...@haskell.org
Dan Piponi wrote:

>> Miguel Mitrofanov wrote:
>>> takeList = evalState . mapM (State . splitAt)
>
>> However, ironically, I stopped using them for pretty
>> much the same reason that Manlio is saying.
>
> Are you saying there's a problem with this implementation? It's the
> only one I could just read immediately. The trick is to see that
> evalState and State are just noise for the type inferencer so we just
> need to think about mapM splitAt. This turns a sequence of integers
> into a sequence of splitAts, each one chewing on the leftovers of the
> previous one. *Way* easier than both the zipWith one-liner and the
> explicit version. It says exactly what it means, almost in English.

I couldn't agree more. In other words, splitAt is really to be thought
of as a function that lives in the state monad.
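(A sketch of that reading for anyone trying it with a current mtl, where the `State` constructor is no longer exported and the `state` function takes its place:)

```haskell
import Control.Monad.State (evalState, state)

-- splitAt n, read as a state action: consume n elements of the
-- remaining input, return them, and leave the rest as the new state.
-- mapM threads the leftovers from one splitAt to the next.
takeList :: [Int] -> [a] -> [[a]]
takeList = evalState . mapM (state . splitAt)

main :: IO ()
main = print (takeList [1,2,3] "abcdef")  -- ["a","bc","def"]
```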

Colin Adams

unread,
Mar 25, 2009, 3:51:05 AM3/25/09
to Jonathan Cast, Miguel Mitrofanov, Haskell Cafe mailing list
2009/3/24 Jonathan Cast <jonath...@fastmail.fm>:
> On Tue, 2009-03-24 at 22:33 +0300, Eugene Kirpichov wrote:
>> Pretty cool once you know what the function does, but I must admit I
>> wouldn't immediately guess the purpose of the function when written in
>> this way.
>
> I wouldn't immediately guess the purpose of the function written in any
> way.
>
> I think, in general, the best way to document the purpose of the
> function is
>
>    -- | Split a list into a sequence of partitions of specified
> lengths

>    takeList :: [Int] -> [a] -> [[a]]

Thank-you Jonathan.

That's the first message in this thread I've managed to understand.

Thomas Hartman

unread,
Mar 25, 2009, 3:52:32 AM3/25/09
to Haskell Cafe
What about

import Data.List

partAt n xs =
 let (beg,end) = splitAt n xs
 in beg : ( case end of
              [] -> []
              xs -> partAt n xs)

t = partAt 3 [1..10]


It's tail recursive (I think!) and should be pretty easy to understand
even for a beginner, no?

2009/3/24 Manlio Perillo <manlio_...@libero.it>:


> Tim Newsham ha scritto:
>>>
>>> These friends are very interested in Haskell, but it seems that the main
>>> reason why they don't start to seriously learning it, is that when they
>>> start reading some code, they feel the "Perl syndrome".
>>>
>>> That is, code written to be "too smart", and that end up being totally
>>> illegible by Haskell novice.
>>>
>>> I too have this feeling, from time to time.
>>>
>>> Since someone is starting to write the Haskell coding style, I really
>>> suggest him to take this "problem" into strong consideration.
>>
>> When you think about it, what you are saying is that Haskell programmers
>> shouldn't take advantage of the extra tools that Haskell provides.
>
> No, I'm not saying this.
>
> But, as an example, when you read a function like:
>
> buildPartitions xs ns = zipWith take ns . init $ scanl (flip drop) xs ns
>
> that can be rewritten (argument reversed) as:
>

> takeList :: [Int] -> [a] -> [[a]]

> takeList [] _         =  []
> takeList _ []         =  []
> takeList (n : ns) xs  =  head : takeList ns tail
>    where (head, tail) = splitAt n xs
>

> I think that there is a problem.
>
> The buildPartition contains too many "blocks".
> And I have read code with even more "blocks" in one line.
>
> It may not be a problem for a "seasoned" Haskell programmer, but when you
> write some code, you should never forget that your code will be read by
> programmers that can not be at your same level.
>
> I think that many Haskell programmers forget this detail, and IMHO this is
> wrong.
>
>> Haskell provides the ability to abstract code beyond what many other
>> programming systems allow.  This abstraction gives you the ability to
>> express things much more tersely.  This makes the code a lot harder to read
>> for people who are not familiar with the abstractions being used.
>
> The problem is that I have still problems at reading and understanding code
> that is too much terse...
> Because I have to assemble in my mind each block, and if there are too many
> blocks I have problems.
>
>> [...]
>
>
> Manlio

Thomas Hartman

unread,
Mar 25, 2009, 4:25:29 AM3/25/09
to Haskell Cafe
sorry, wrong function.

should be

partitions [] xs = []
partitions (n:parts) xs =
  let (beg,end) = splitAt n xs
  in beg : ( case end of
               [] -> []
               xs -> partitions parts xs)


t = partitions [1,2,3] [1..10]


which is not quite as nice, I admit.

2009/3/25 Thomas Hartman <tphy...@gmail.com>:

Manlio Perillo

unread,
Mar 25, 2009, 8:29:05 AM3/25/09
to Heinrich Apfelmus, haskel...@haskell.org
Heinrich Apfelmus ha scritto:

> Manlio Perillo wrote:
>> Conal Elliott ha scritto:
>>> Manlio,
>>>
>>> We live in the age of participation -- of co-education. Don't worry
>>> about text-books. Contribute to some wiki pages & blogs today that
>>> share these smart techniques with others.
>>>
>> When I started learning Haskell (by my initiative), what I did was:
>>
>> [steps 1) - 9), mostly internet tutorials ]
>
> I think you'd have had a much easier time by starting with a proper book
> right away, like Richard Bird's "Introduction to Functional Programming
> in Haskell", accompanied by Real World Haskell.

Unfortunately, one year ago Real World Haskell was not yet available.
And note that I have no problems with basic functional programming concepts.
My problems are specific to Haskell.

> You see, the reason that
> books cost money is (should be) high quality content. :)
>


Manlio

Achim Schneider

unread,
Mar 25, 2009, 8:40:57 AM3/25/09
to haskel...@haskell.org
"Alberto G. Corona " <agoc...@gmail.com> wrote:

> However, reusability of source code and maintainability has never
> been taken seriously by haskell programmers, simply because there are
> no industrial projects in Haskell with dozens of people with
> different skills that come and go.
>

Now that's a claim.

In the sense that I don't do commercial haskell coding, but know what
maintainability is, anyway: I've maintained everything from utterly
atrocious to mindboggingly elegant java code for a living. I can tell
you with 110% confidence that maintainability is about composability,
_on_every_level_: Not just on statement-level. Otherwise, I wouldn't
have cussed that much. Curiously enough, as soon as the code
didn't make you wince, it was easily maintainable. This is closely
related to Linus' observation that good [imperative] code is
data-structure centred, and Greenspun's Tenth Rule.


With Haskell, there's finally a language that makes large-scale changes
as easy as small-scale changes without having to resort to implement an
interpreter for a functional language. As the position of changes tends
to travel upwards in a bottom-up approach and small-scale changes are
easy to pull off (you already understand what you need to do since
otherwise you wouldn't have identified the need for a small-scale
change and continued to add onion layers), caring about editability on
function level just doesn't pay off.


That's why I don't care whether or not I have to re-write a whole
function to change it: If it's going to change, which isn't all that
likely, I can cope with renaming it and write another say 160
characters directly below it. Adding a proper quickcheck property (if
it didn't exist, yet, or the semantics changed) is usually more work:
You don't only need to get the proposition right, but also the test
case generator.


--
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.

Manlio Perillo

unread,
Mar 25, 2009, 8:51:46 AM3/25/09
to wren ng thornton, Haskell Cafe
wren ng thornton ha scritto:
> Manlio Perillo wrote:
> [...]

> Following directly from the Rule of Least Power, if you can get away
> with foreach then that's what you should use. Why? Because the less
> power the construct has, the fewer corner cases and generalizations a
> reader of the code needs to consider. Now, just because iterators exist
> does not mean that one should never use the more general tool. If you're
> fighting to break out of your chosen straitjacket, then chances are it's
> the wrong one to use in the first place; it'd be clearer to use more
> power and have less fighting.
>

> [...]


Note that, as I have already written, I agree with you.
And this is one of the reasons i like Haskell.

The main problem, here, is that:
- recursion and pattern matching are explained in every tutorial about
functional programming and Haskell.

This is the reason why I find them more "natural".

- high level, Haskell specific, abstractions, are *not* explained in
normal tutorials or books.
The libraries where these concepts are implemented, are not well
documented.
Most of the "documentation" is in research papers, and a "normal"
programmer doesn't want to read these papers.

Only in the recent "Real World Haskell" have all these high level
abstractions been properly documented.


Manlio

Claus Reinke

unread,
Mar 25, 2009, 9:37:41 AM3/25/09
to Haskell Cafe mailing list

The beauty of functional programming is that there doesn't have
to be a conflict between those who prefer explicit and those
who prefer implicit recursion. Think of them as different views
on the same functions - just as with graphical visualizations,
pick the view best suited to your purpose and use equational
reasoning to transform one view into another, as needed.

Improving your experience in reasoning about code is going to
help at every level of abstraction, and since you've already
paid the price (using a pure language, to make reasoning easier)
you might as well avail yourself of the facilities;-)

While developing, I might prefer abstraction, as fewer details
mean that I can hold more of the problem in my head at any
point, increasing my chances of seeing all the way to a
solution; if optimizing, or when I haven't found the right
abstractions yet, I might have to resort to less abstract code
until I've sorted out those details or until GHC deals with
the more abstract forms as well as with the less abstract ones.

Fine, you say, but I'd never would have thought of abstract
views like splitAt as a state transformer. Okay, before this
thread, I might not have thought of using that, either. But
after this thread, I'd hope for it to become part of my
thinking about Haskell code. And the way I do that is by
taking the abstract code and unfolding it (replacing instances
of left-hand sides with instances of right-hand sides of
definitions - the source links in the Haddock documentation
are very useful for that) until I get to some less abstract
code that I might have come up with.

That doesn't mean that I'd have had the insights to play the
derivation backwards, but by breaking the transformation from
less abstract to more abstract view into smaller steps, starting
from the abstract form that incorporates the additional insights
I was missing, I can increase my understanding of what is going
on, and my chances of noticing the opportunities next time. It
also confirms whether or not the two solutions really are the
same (as has been pointed out, that wasn't the case here).

Paraphrasing and tweaking Sjur Gjøstein Karevoll's remark
a little: clever Perl code is what you hope you understood in
the past, when you wrote it; clever Haskell code is what you
hope you'll understand in the future, when you'll write it yourself!-)

The derivation below is best followed by replaying it yourself
in your editor, but I hope you'll find it helpful anyway.

Claus

-- view transformation: reducing the level of abstraction
-- by turning implicit into explicit recursion

takeList = evalState . mapM (State . splitAt)

-- unfold 'mapM'

takeList = evalState . sequence . map (State . splitAt)

-- unfold 'sequence'

takeList = evalState . foldr k (return []) . map (State . splitAt)
  where k m m' = do x<-m; xs<-m'; return (x:xs)
        foldr op n []    = n
        foldr op n (h:t) = h `op` foldr op n t

-- specialize 'foldr' for the call parameters 'k' and 'return []'

takeList = evalState . foldrkn . map (State . splitAt)
  where k m m' = do x<-m; xs<-m'; return (x:xs)
        foldrkn []    = return []
        foldrkn (h:t) = h `k` foldrkn t

-- unfold 'k'

takeList = evalState . foldrkn . map (State . splitAt)
  where foldrkn []    = return []
        foldrkn (h:t) = do x<-h; xs<-foldrkn t; return (x:xs)

-- foldr op n . map f = foldr (op.f) n

takeList = evalState . foldrkn
  where foldrkn []    = return []
        foldrkn (h:t) = do x<-State (splitAt h); xs<-foldrkn t; return (x:xs)

-- unfold 'return' for 'State', eta-expand 'splitAt h'

takeList = evalState . foldrkn
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x<-State (\s->splitAt h s); xs<-foldrkn t; State (\s->(x:xs,s))

-- eta-expand body of 'takeList'

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x<-State (\s->splitAt h s); xs<-foldrkn t; State (\s->(x:xs,s))

-- unfold the second '>>=' for 'State'

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x<-State (\s->splitAt h s)
                           State (\s->let (xs,s') = runState (foldrkn t) s
                                      in runState (State (\s->(x:xs,s))) s')

-- runState . State = id

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x<-State (\s->splitAt h s)
                           State (\s->let (xs,s') = runState (foldrkn t) s
                                      in (\s->(x:xs,s)) s')

-- beta-reduce

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = do x<-State (\s->splitAt h s)
                           State (\s->let (xs,s') = runState (foldrkn t) s
                                      in (x:xs,s'))

-- unfold the remaining '>>=' for 'State'

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = State (\s->let (x,s') = runState (State (\s->splitAt h s)) s
                               in runState (State (\s->let (xs,s') = runState (foldrkn t) s
                                                       in (x:xs,s'))) s')

-- runState . State = id (2x)

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = State (\s->let (x,s') = (\s->splitAt h s) s
                               in (\s->let (xs,s') = runState (foldrkn t) s
                                       in (x:xs,s')) s')

-- beta-reduce (2x)

takeList ns xs = evalState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = State (\s->let (x,s') = splitAt h s
                               in let (xs,s'') = runState (foldrkn t) s'
                                  in (x:xs,s''))

-- unfold 'evalState'

takeList ns xs = fst $ runState (foldrkn ns) xs
  where foldrkn []    = State (\s->([],s))
        foldrkn (h:t) = State (\s->let (x,s') = splitAt h s
                               in let (xs,s'') = runState (foldrkn t) s'
                                  in (x:xs,s''))

-- all calls to 'foldrkn' are wrapped in 'runState', bring it into the definition

takeList ns xs = fst $ (foldrkn ns) xs
  where foldrkn []    = runState $ State (\s->([],s))
        foldrkn (h:t) = runState $ State (\s->let (x,s') = splitAt h s
                                          in let (xs,s'') = (foldrkn t) s'
                                             in (x:xs,s''))

-- runState . State = id (2x)

takeList ns xs = fst $ (foldrkn ns) xs
  where foldrkn []    = \s->([],s)
        foldrkn (h:t) = \s->let (x,s') = splitAt h s
                            in let (xs,s'') = (foldrkn t) s'
                               in (x:xs,s'')

-- clean up

takeList ns xs = fst (foldrkn ns xs)
  where foldrkn []    s = ([],s)
        foldrkn (h:t) s = let (x,s')   = splitAt h s
                              (xs,s'') = foldrkn t s'
                          in (x:xs,s'')

-- 'snd (foldrkn _ _)' is never used, remove it

takeList ns xs = foldrkn ns xs
  where foldrkn []    s = []
        foldrkn (h:t) s = let (x,s') = splitAt h s
                              xs     = foldrkn t s'
                          in x:xs

-- remove indirection

takeList []    s = []
takeList (h:t) s = x : takeList t s'
  where (x,s') = splitAt h s
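As a sanity check on a transformation like this, the two endpoints can be compared with QuickCheck (a sketch using the `state` function from a current mtl, rather than the `State` constructor):

```haskell
import Control.Monad.State (evalState, state)
import Test.QuickCheck (quickCheck)

-- the abstract starting point of the derivation
takeListS :: [Int] -> [a] -> [[a]]
takeListS = evalState . mapM (state . splitAt)

-- the explicit endpoint of the derivation
takeListE :: [Int] -> [a] -> [[a]]
takeListE []    _ = []
takeListE (h:t) s = x : takeListE t s'
  where (x, s') = splitAt h s

main :: IO ()
main = quickCheck (\ns xs -> takeListS ns xs == takeListE ns (xs :: [Int]))
```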

Zachary Turner

unread,
Mar 25, 2009, 9:45:03 AM3/25/09
to wren ng thornton, Haskell Cafe
On Tue, Mar 24, 2009 at 10:32 PM, wren ng thornton <wr...@freegeek.org>wrote:

> Both of these conclusions seem quite natural to me, even from before
> learning Haskell. It seems, therefore, that "naturality" is not the proper
> metric to discuss. It's oft overlooked, but the fact is that expressivity
> comes not from more formal power, but from _less_.
>

> * Natural language has a limited range of words and syntactic constructs,
> but gives the large-enough building blocks to enable unconstrained
> communication; whereas a language with a unique word for every utterance
> (arguably simpler) is impossible to learn.


On the other hand, -certain- languages are more expressive than others. As
an example, I personally find English far more expressive than both
Vietnamese and Japanese, yet English is far more complicated. Japanese, for
example, has exactly 1 pronunciation for each "alphabet letter". Hence
you'll never find pairs like the English "lead" and "lead", where the first
means to guide someone or something, or to give direction, and the second is
a chemical element. Words that are spelled the same in Japanese are
pronounced the same 100% of the time. Furthermore, I find that you are far
more limited in your choices of how to form ideas into sentences. In
English there might be 20 different ways to phrase the exact same sentence
for use in a certain context, where the sentences end up being almost
identical with the exception of 1 or 2 words changed or shuffled around. In
Japanese there would probably be far fewer. In Vietnamese there's a similar
problem, in that there are not very many synonyms at all, and NO
conjugations. It is complicated by the fact that it's a tonal language, but
on the other hand the tonality is independent of the expressivity in my
experience. Similar to Chinese, although I can't speak for the expressivity
of Chinese I would not be surprised at all if written Chinese was extremely
expressive, but spoken not so much.

Anyway the point of all this is that in English you have more freedom and
more power, and hence you use (abuse?) the syntax of the language to create
words, sentences, and phrases that express very powerful things.
Furthermore, they are things that almost all English speakers would be able
to grasp the full meaning of what you've said.

Achim Schneider

unread,
Mar 25, 2009, 9:47:44 AM3/25/09
to haskel...@haskell.org
Manlio Perillo <manlio_...@libero.it> wrote:

> The main problem, here, is that:
> - recursion and pattern matching are explained in every tutorial about
> functional programming and Haskell.
>
> This is the reason why I find them more "natural".
>

Well, you're going to have a hard time writing a BASIC tutorial without
mentioning GOTO. While surely it has to be there, for the sake of
completeness of fundamentals, I completely agree that...

> - high level, Haskell specific, abstractions, are *not* explained in
> normal tutorials or books.
> The libraries where these concepts are implemented, are not well
> documented.
> Most of the "documentation" is in research papers, and a "normal"
> programmer doesn't want to read these papers.
>
> Only in the recent "Real World Haskell" have all these high level
> abstractions been properly documented.
>

..GOTO alone doesn't teach you how to write a loop, trust me, I was
stuck there for a good while, eons ago.

The prelude, as well as commonly used functions that should be in
there, utterly lack accompanying documentation. There should be no
function without a usage, as in foldl/sum/product, and no usage without
explanation why foldl is chosen over foldl' and foldr. Think of a
Preludopedia, accompanying the Typeclassopedia: Documentation where you
don't have to snatch understanding out of assem^H^H^H^H^H^H code
written using primitive recursion, to make it a bit easier to see the
wood despite of all those trees.

PS: Shouldn't zipWith be defined in terms of zip, uncurry and map
instead of
zipWith f (a:as) (b:bs) = f a b : zipWith f as bs
zipWith _ _ _ = []
?
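The proposed definition does agree with the Prelude's on finite lists; a quick sketch (the Prelude presumably keeps the direct recursion for list-fusion and performance reasons):

```haskell
-- Achim's proposed definition, checked against the Prelude's zipWith
zipWith' :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith' f as bs = map (uncurry f) (zip as bs)

main :: IO ()
main = print (zipWith' (+) [1,2,3] [10,20] == zipWith (+) [1,2,3] [10,20])  -- True
```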

--
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.

Simon Marlow

unread,
Mar 25, 2009, 11:10:23 AM3/25/09
to Dan Piponi, Miguel Mitrofanov, Haskell Cafe mailing list
Dan Piponi wrote:
>> Miguel Mitrofanov wrote:
>>> takeList = evalState . mapM (State . splitAt)
>
>> However, ironically, I stopped using them for pretty
>> much the same reason that Manlio is saying.
>
> Are you saying there's a problem with this implementation? It's the
> only one I could just read immediately. The trick is to see that
> evalState and State are just noise for the type inferencer so we just
> need to think about mapM splitAt. This turns a sequence of integers
> into a sequence of splitAts, each one chewing on the leftovers of the
> previous one. *Way* easier than both the zipWith one-liner and the
> explicit version. It says exactly what it means, almost in English.

But it only works out nicely because the ordering of the components of the
pair returned by splitAt matches the ordering that the state monad expects
(and I can never remember which way around they are in Control.Monad.State).

Try doing it with mapAccumL, which is arguably the right abstraction, but
has the components the other way around.

Cheers,
Simon
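For the curious, the mapAccumL version works out as follows, with a hand-rolled `swap` adapting splitAt's (taken, rest) pair to the (accumulator, result) order mapAccumL expects (current base ships `Data.Tuple.swap`; at the time one had to define it, as the rest of the thread does):

```haskell
import Data.List (mapAccumL)

swap :: (a, b) -> (b, a)
swap (a, b) = (b, a)

-- mapAccumL threads the leftover list as the accumulator;
-- swap turns splitAt's (taken, rest) into (rest, taken).
takeList :: [Int] -> [a] -> [[a]]
takeList ns xs = snd (mapAccumL (\s n -> swap (splitAt n s)) xs ns)

main :: IO ()
main = print (takeList [1,2,3] "abcdef")  -- ["a","bc","def"]
```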

Jonathan Cast

unread,
Mar 25, 2009, 11:26:07 AM3/25/09
to Simon Marlow, Haskell Cafe mailing list, Miguel Mitrofanov
On Wed, 2009-03-25 at 15:09 +0000, Simon Marlow wrote:
> the ordering that the state monad expects
> (and I can never remember which way around they are in Control.Monad.State).

Really? I found it obvious once I figured out it how simple it made
(>>=). With the order from Control.Monad.State (with constructors
ignored):

a >>= f = \ s -> case s a of
    (x, s') -> f x s'

Reversing the order of the components of the result gives you

a >>= f = \ s -> case s a of
    (s', x) -> f x s'

which just looks weird.
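A minimal sketch making the ordering concrete, with the constructor written out (and the scrutinee written `runState a s`, per the correction later in the thread):

```haskell
-- A minimal State newtype, just to make the (>>=) ordering concrete.
newtype State s a = State { runState :: s -> (a, s) }

bindS :: State s a -> (a -> State s b) -> State s b
bindS a f = State $ \s -> case runState a s of
  (x, s') -> runState (f x) s'

-- a tiny use: pop two elements off a list-shaped state
pop :: State [a] a
pop = State (\(x:xs) -> (x, xs))

main :: IO ()
main = print (runState (pop `bindS` \x -> pop `bindS` \y -> State (\s -> ((x, y), s))) "abc")
-- prints (('a','b'),"c")
```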

> Try doing it with mapAccumL, which is arguably the right abstraction,
> but
> has the components the other way around.

Define

swap (a, b) = (b, a)

You'll never worry about the order of components of a pair again. This
function is as indispensable as flip.

jcc

Simon Marlow

unread,
Mar 25, 2009, 11:32:55 AM3/25/09
to Jonathan Cast, Haskell Cafe mailing list, Miguel Mitrofanov
Jonathan Cast wrote:
> On Wed, 2009-03-25 at 15:09 +0000, Simon Marlow wrote:
>> the ordering that the state monad expects
>> (and I can never remember which way around they are in Control.Monad.State).
>
> Really? I found it obvious once I figured out it how simple it made
> (>>=). With the order from Control.Monad.State (with constructors
> ignored):
>
> a >>= f = \ s -> case s a of
> (x, s') -> f x s'
>
> Reversing the order of the components of the result gives you
>
> a >>= f = \ s -> case s a of
> (s', x) -> f x s'
>
> which just looks weird.

It might look weird to you, but that's the way that GHC's IO and ST monads
do it. It looks perfectly natural to me! (and you have the a and s the
wrong way around in 'case s a', BTW).

>> Try doing it with mapAccumL, which is arguably the right abstraction,
>> but
>> has the components the other way around.
>
> Define
>
> swap (a, b) = (b, a)

ew, that's far too crude. I think you mean

swap = uncurry $ flip (,)

Cheers,
Simon

Peter Verswyvelen

unread,
Mar 25, 2009, 11:35:34 AM3/25/09
to Simon Marlow, Haskell Cafe mailing list, Miguel Mitrofanov
On Wed, Mar 25, 2009 at 4:09 PM, Simon Marlow <marl...@gmail.com> wrote:

> But it only works out nicely because the ordering of the components of the
> pair returned by splitAt matches the ordering that the state monad expects
> (and I can never remember which way around they are in Control.Monad.State).
>

Now you mention this, I often had to write a little function

swap :: (a,b) -> (b,a)

It seems many other authors have done the same in their own modules. Maybe
this should be part of the Prelude?

Gregg Reynolds

unread,
Mar 25, 2009, 11:41:10 AM3/25/09
to Zachary Turner, Haskell Cafe
2009/3/25 Zachary Turner <diviso...@gmail.com>:

>
> On the other hand, -certain- languages are more expressive than others.  As
> an example, I personally find English far more expressive than both
> Vietnamese and Japanese, yet English is far more complicated.  Japanese, for

Way off topic, but for what it's worth, you can take it as axiomatic
that all natural languages are equally expressive, qua languages.
They're also equally easy/hard overall. The areas of difficulty are
just in different places. Japanese grammar is extraordinarily simple,
but achieving mastery of the spoken language *in Japanese society* is
next to impossible, because usage reflects social constructions. As
you no doubt know, what is not said is sometimes just as expressive as
what is said in Japanese; very maddening to a logorrheic American,
just as an English speaker's need to explicitly articulate
*everything* is no doubt annoying to Japanese.

Regarding spelling and phonology: the idea that "one symbol, one
sound" is somehow optimal is the Myth That Will Not Die. None other
than Chomsky himself argued that English orthography is near-optimal
for the English language. All writing systems are designed to serve
speakers of the language, and many languages are poorly modeled by a
one symbol, one sound system.

I'm not sure there's a lesson there for formal language designers and
programmers, except maybe that the expressiveness (elegance?) of a
text usually depends to a great extent on the writer more than the
language.

-g

Robin Green

unread,
Mar 25, 2009, 11:51:38 AM3/25/09
to haskel...@haskell.org
On Wed, 25 Mar 2009 08:25:40 -0700
Jonathan Cast <jonath...@fastmail.fm> wrote:

> Define
>
> swap (a, b) = (b, a)

By the way, if you want to be "too smart", there's a generalised
version of swap in Control.Category.Braided in the category-extras
package.

That might be a bit overkill though.
--
Robin

Jonathan Cast

unread,
Mar 25, 2009, 12:15:22 PM3/25/09
to Robin Green, haskel...@haskell.org
On Wed, 2009-03-25 at 03:01 +0000, Robin Green wrote:
> On Wed, 25 Mar 2009 08:25:40 -0700
> Jonathan Cast <jonath...@fastmail.fm> wrote:
>
> > Define
> >
> > swap (a, b) = (b, a)
>
> By the way, if you want to be "too smart", there's a generalised
> version of swap in Control.Category.Braided in the category-extras
> package.

Thanks, I'll check it out.

> That might be a bit overkill though.

What is this word `overkill' you use?

jcc

David Menendez

unread,
Mar 25, 2009, 12:17:13 PM3/25/09
to Simon Marlow, Miguel Mitrofanov, Haskell Cafe mailing list
On Wed, Mar 25, 2009 at 11:32 AM, Simon Marlow <marl...@gmail.com> wrote:
> Jonathan Cast wrote:
>>
>> Define
>>
>>    swap (a, b) = (b, a)
>
> ew, that's far too crude.  I think you mean
>
>  swap = uncurry $ flip (,)

On the theme of using monads where you might not expect,

swap = liftA2 (,) snd fst

--
Dave Menendez <da...@zednenem.com>
<http://www.eyrie.org/~zednenem/>

Jonathan Cast

unread,
Mar 25, 2009, 12:17:42 PM3/25/09
to Simon Marlow, Haskell Cafe mailing list, Miguel Mitrofanov
On Wed, 2009-03-25 at 15:32 +0000, Simon Marlow wrote:
> Jonathan Cast wrote:
> > On Wed, 2009-03-25 at 15:09 +0000, Simon Marlow wrote:
> >> the ordering that the state monad expects
> >> (and I can never remember which way around they are in Control.Monad.State).
> >
> > Really? I found it obvious once I figured out it how simple it made
> > (>>=). With the order from Control.Monad.State (with constructors
> > ignored):
> >
> > a >>= f = \ s -> case s a of
> > (x, s') -> f x s'
> >
> > Reversing the order of the components of the result gives you
> >
> > a >>= f = \ s -> case s a of
> > (s', x) -> f x s'
> >
> > which just looks weird.
>
> It might look weird to you, but that's the way that GHC's IO and ST monads
> do it. It looks perfectly natural to me!

Right. Consider this an argument for fixing IO/ST(/STM?) to conform to
the self-evidently correct ordering of Control.Monad.State :)

> (and you have the a and s the
> wrong way around in 'case s a', BTW).

Um, yes. /Mea culpa/.

> >> Try doing it with mapAccumL, which is arguably the right abstraction,
> >> but
> >> has the components the other way around.
> >
> > Define
> >
> > swap (a, b) = (b, a)
>
> ew, that's far too crude. I think you mean
>
> swap = uncurry $ flip (,)

Ah, yes.

jcc

Conal Elliott

unread,
Mar 25, 2009, 2:28:15 PM3/25/09
to David Menendez, Simon Marlow, Haskell Cafe mailing list, Miguel Mitrofanov
or via Arrow:

swap = snd &&& fst

On Wed, Mar 25, 2009 at 9:16 AM, David Menendez <da...@zednenem.com> wrote:

> On Wed, Mar 25, 2009 at 11:32 AM, Simon Marlow <marl...@gmail.com> wrote:
> > Jonathan Cast wrote:
> >>
> >> Define
> >>
> >> swap (a, b) = (b, a)
> >
> > ew, that's far too crude. I think you mean
> >
> > swap = uncurry $ flip (,)
>
> On the theme of using monads where you might not expect,
>
> swap = liftA2 (,) snd fst
>
> --
> Dave Menendez <da...@zednenem.com>

> <http://www.eyrie.org/~zednenem/>
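[Editorial aside: the four definitions floated in this subthread all denote the same function. A minimal self-contained sketch, using only base's Control.Applicative and Control.Arrow:]

```haskell
import Control.Applicative (liftA2)
import Control.Arrow ((&&&))

swap1, swap2, swap3, swap4 :: (a, b) -> (b, a)
swap1 (a, b) = (b, a)       -- Jonathan's "crude" version
swap2 = uncurry (flip (,))  -- Simon's point-free version
swap3 = liftA2 (,) snd fst  -- David's, via the ((->) r) applicative
swap4 = snd &&& fst         -- Conal's, via Arrow
```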

wren ng thornton

unread,
Mar 25, 2009, 3:19:31 PM3/25/09
to Haskell Cafe
Thomas Hartman wrote:
> sorry, wrong function.
>
> should be
>
> partitions [] xs = []
> partitions (n:parts) xs =
>   let (beg,end) = splitAt n xs
>   in beg : ( case end of
>                [] -> []
>                xs -> partitions parts xs)


It's not tail recursive, FWIW. The recursive expression has (:) as its
head before it hits `partitions`. It is, however, nicely coinductive,
which has other good properties.

We could make it tail-recursive easily,

partitions = go id
  where
    go k []     xs = k []
    go k (n:ns) xs =
      let (beg,end) = splitAt n xs
          k' = k . (beg:)
      in case end of
           []  -> k' []
           xs' -> go k' ns xs'

(Note how this version has `go` as the head of the recursive expression.)

However, this version has different strictness properties. In
particular, let both input lists be infinite (and take a finite portion
of the result). The original version works fine because it gives a
little bit of output (beg:) at each step of the recursion ---which is
all "coinductive" means. The tail-recursive version hits _|_ however,
because we've delayed giving any output (k []) until one of the two
lists hits [] ---we've tried doing induction on co-data and so we hit
an infinite loop.

This dichotomy between coinduction and tail-recursion is quite common.
It's another example of the recently discussed problem of defining foldr
in terms of foldl. Whether the termination differences matter depends on
how the function is to be used.
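The difference is directly observable with take. A self-contained sketch restating both versions above (`partitionsTR` is my name for the tail-recursive variant, not one used in the thread):

```haskell
-- Coinductive: yields one chunk per step, so it copes with infinite input.
partitions :: [Int] -> [a] -> [[a]]
partitions [] _ = []
partitions (n:ns) xs =
  let (beg, end) = splitAt n xs
  in beg : case end of
             []  -> []
             xs' -> partitions ns xs'

-- Tail-recursive via an accumulator: produces nothing until a base case.
partitionsTR :: [Int] -> [a] -> [[a]]
partitionsTR = go id
  where
    go k []     _  = k []
    go k (n:ns) xs =
      let (beg, end) = splitAt n xs
          k'         = k . (beg :)
      in case end of
           []  -> k' []
           xs' -> go k' ns xs'

-- take 2 (partitions [3,3..] [1..])   gives [[1,2,3],[4,5,6]]
-- take 2 (partitionsTR [3,3..] [1..]) never reaches WHNF (loops)
```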


Another nice property of coinduction is that it means we can do
build/fold fusion easily:

partitions = \ns xs -> build (\cons nil -> go cons nil ns xs)
  where
    go cons nil = go'
      where
        go' []     xs = nil
        go' (n:ns) xs =
          let (beg,end) = splitAt n xs
          in beg `cons` case end of
                          []  -> nil
                          xs' -> go' ns xs'

By using the GHC.Exts.build wrapper the fusion rules will automatically
apply. The second wrapper, go, is just so that the worker, go', doesn't
need to pass cons and nil down through the recursion.

--
Live well,
~wren

Dan Weston

unread,
Mar 25, 2009, 3:40:42 PM3/25/09
to wren ng thornton, Haskell Cafe
So to be clear with the terminology:

inductive = good consumer?
coinductive = good producer?

So fusion should be possible (automatically? or do I need a GHC rule?) with
inductive . coinductive

Or have I bungled it?

Dan

wren ng thornton wrote:
> Thomas Hartman wrote:
>> sorry, wrong function.
>>
>> should be
>>
>> partitions [] xs = []
>> partitions (n:parts) xs =
>>   let (beg,end) = splitAt n xs
>>   in beg : ( case end of
>>                [] -> []
>>                xs -> partitions parts xs)
>
>
> It's not tail recursive, FWIW. The recursive expression has (:) as its
> head before it hits `partitions`. It is, however, nicely coinductive,
> which has other good properties.
>
> We could make it tail-recursive easily,
>
> partitions = go id
>   where
>     go k []     xs = k []
>     go k (n:ns) xs =
>       let (beg,end) = splitAt n xs
>           k' = k . (beg:)
>       in case end of
>            []  -> k' []
>            xs' -> go k' ns xs'
>
> (Note how this version has `go` as the head of the recursive expression.)
>

> ...however this version has different strictness properties. In


Thomas Hartman

unread,
Mar 25, 2009, 3:44:32 PM3/25/09
to Dan Piponi, Miguel Mitrofanov, Haskell Cafe mailing list
> Are you saying there's a problem with this implementation? It's the

Yes, there is actually a problem with this implementation.

import Data.List
import Control.Monad.State
import Debug.Trace.Helpers


partitions [] xs = []
partitions (n:parts) xs =
  let (beg,end) = splitAt n xs
  in beg : ( case end of
               [] -> []
               xs -> partitions parts xs)

partitionsSimpleStupidGood = partitions

partitionsTooFrickinClever = evalState . mapM (State . splitAt)

testP pf = mapM_ putStrLn
  [ show . pf [3,7..] $ [1..10]
  , show . pf [3,7,11,15] $ [1..]
  , show . head . last $ pf [3,3..] [1..10^6]
  ]

*Main> testP partitionsSimpleStupidGood
[[1,2,3],[4,5,6,7,8,9,10]]
[[1,2,3],[4,5,6,7,8,9,10],[11,12,13,14,15,16,17,18,19,20,21],[22,23,24,25,26,27,28,29,30,31,32,33,34,35,36]]
1000000

Now try testP partitionsTooFrickinClever

Now, I am sure there is a fix for whatever is ailing the State monad
version, and we would all learn a lesson from it about strictness,
laziness, and the State monad.

However, there is something to be said for code that just looks like a
duck and quacks like a duck. It's less likely to surprise you.

So... I insist... Easy for a beginner to read == better!


2009/3/24 Dan Piponi <dpi...@gmail.com>:


>> Miguel Mitrofanov wrote:
>>> takeList = evalState . mapM (State . splitAt)
>
>> However, ironically, I stopped using them for pretty
>> much the same reason that Manlio is saying.
>
> Are you saying there's a problem with this implementation? It's the
> only one I could just read immediately. The trick is to see that
> evalState and State are just noise for the type inferencer so we just
> need to think about mapM splitAt. This turns a sequence of integers
> into a sequence of splitAts, each one chewing on the leftovers of the
> previous one. *Way* easier than both the zipWith one-liner and the
> explicit version. It says exactly what it means, almost in English.

> --
> Dan

Dan Weston

unread,
Mar 25, 2009, 3:49:44 PM3/25/09
to Thomas Hartman, Haskell Cafe mailing list, Miguel Mitrofanov

> However, there is something to be said for code that just looks like a
> duck and quacks like a duck. It's less likely to surprise you.
>
> So... I insist... Easy for a beginner to read == better!

All you have said is that one building a skyscraper will need
scaffolding, blueprints, and a good building inspector. The intended
inhabitants of that skyscraper will not want to stare out at scaffolding
for the rest of their lives.

Put the simple version in a QuickCheck predicate. That is the ideal
place for it.

All the better if through this process we all learn a lesson about
strictness, laziness, and the State monad.
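[Editorial aside: Dan's suggestion, sketched concretely. This assumes mtl's Control.Monad.State with the lowercase `state` of mtl >= 2; `prop_agree` is my name for the QuickCheck-ready predicate. On finite inputs the clever and simple versions agree, up to the empty chunks the State version emits once the list is exhausted.]

```haskell
import Control.Monad.State (evalState, state)

-- Miguel's clever version (mtl >= 2 spells the wrapper `state`).
takeList :: [Int] -> [a] -> [[a]]
takeList = evalState . mapM (state . splitAt)

-- The simple recursive version, serving as the specification.
partitions :: [Int] -> [a] -> [[a]]
partitions [] _ = []
partitions (n:ns) xs =
  let (beg, end) = splitAt n xs
  in beg : case end of
             []  -> []
             xs' -> partitions ns xs'

-- QuickCheck-style predicate: ignoring empty chunks, the two agree on
-- finite inputs (splitAt already treats negative counts as zero).
prop_agree :: [Int] -> [Int] -> Bool
prop_agree ns xs =
  filter (not . null) (takeList ns xs)
    == filter (not . null) (partitions ns xs)
```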

Dan

Thomas Hartman

unread,
Mar 25, 2009, 3:55:56 PM3/25/09
to Haskell Cafe mailing list
Oh, and incidentally, if you change to Control.Monad.State.Strict

*Main> testP partitionsTooFrickinClever
*** Exception: stack overflow

Don't get me wrong -- I have learned a lot from this thread, and I
think it would be really cool if there was a way to do this that is
clever, that is *right*.

But since the original point was about style, I think this underscores
the point that good style should be newbie friendly *if possible*.
Especially since being a newbie in haskell isn't like in other
languages -- might mean you have been using it for years as a hobby,
but just don't have comfort in certain monads and idioms.


2009/3/25 Thomas Hartman <tphy...@gmail.com>:

Thomas Hartman

unread,
Mar 25, 2009, 4:01:40 PM3/25/09
to Haskell Cafe mailing list
Since this thread is ostensibly about haskell style, it should also be
about haskell style *today*.

As I think Yitz noted earlier, this is a moving target.

Adoption of haskell by the masses -- moving.
Skill of haskell hordes -- moving.
Abstractions available as part of "idiomatic haskell", and correctness
of these abstractions, as the state monad for partitions cockup shows
-- also moving.

2009/3/25 Jonathan Cast <jonath...@fastmail.fm>:


> On Wed, 2009-03-25 at 12:48 -0700, Dan Weston wrote:
>> > However, there is something to be said for code that just looks like a
>>  > duck and quacks like a duck. It's less likely to surprise you.
>>  >
>>  > So... I insist... Easy for a beginner to read == better!
>>
>> All you have said is that one building a skyscraper will need
>> scaffolding, blueprints, and a good building inspector. The intended
>> inhabitants of that skyscraper will not want to stare out at scaffolding
>> for the rest of their lives.
>

> +1
>
> jcc

Thomas Hartman

unread,
Mar 25, 2009, 4:13:25 PM3/25/09
to Haskell Cafe mailing list
Not only is your "simpler" function easier to read, it is also more correct.

partitionsHubris xs ns = zipWith take ns . init $ scanl (flip drop) xs ns

partitionsBeginner :: [Int] -> [a] -> [[a]]
partitionsBeginner [] _ = []
partitionsBeginner _ [] = []
partitionsBeginner (n : ns) xs = head : partitionsBeginner ns tail
  where (head, tail) = splitAt n xs

Run both through testP to see why.

testP pf = mapM_ putStrLn
  [ show . pf [3,7..] $ [1..10]
  , show . pf [3,7,11,15] $ [1..]
  , show . head . last $ pf [3,3..] [1..10^6]
  ]

Of course, I favor

partitions [] xs = []
partitions (n:parts) xs =
  let (beg,end) = splitAt n xs
  in beg : ( case end of
               [] -> []
               xs -> partitions parts xs)

which to my eyes is even easier to read (and also correct).

Pattern matching is awesome language feature. use it!

Thomas Hartman

unread,
Mar 25, 2009, 4:21:33 PM3/25/09
to Haskell Cafe mailing list
s/Pattern matching is awesome language feature. use it!
/Pattern matching is awesome language feature. Don't be ashamed to use it! /

:)


2009/3/25 Thomas Hartman <tphy...@gmail.com>:

wren ng thornton

unread,
Mar 25, 2009, 4:22:02 PM3/25/09
to Haskell Cafe
Manlio Perillo wrote:
> The main problem, here, is that:
> - recursion and pattern matching are explained in every tutorial about
> functional programming and Haskell.
>
> This is the reason why I find them more "natural".
>
> - high level, Haskell specific, abstractions, are *not* explained in
> normal tutorials or books.
> The libraries where these concepts are implemented, are not well
> documented.

This latter point is indeed the problem. But it may be worth rephrasing
a bit. The big problem with the Haskell tutorials I've seen is that they
aim to teach orthodoxy rather than orthopraxy. Or to put it less
religiously, they teach the nuts and bolts of how the language is
_constructed_, instead of teaching the idioms and ideas of how the
language is _used_. It's like learning C from Kernighan&Ritchie ---a
fine book, don't get me wrong, but it teaches the words of the language
instead of the community of the speakers. If you've memorized K&R you're
still a novice C programmer.

Given our history, this approach made sense. Haskell's been around for a
long time, but most of that history has been in academia where it's
assumed that people will know what to do if only they knew how to do it.
More recently Haskell has been moving from academic toy to industrial
tool, and that shift necessitates a similar shift from teaching the
language as a collection of interesting features to teaching the
language as a collection of interesting libraries. History hinders this
transition--- both the internal history of those who know Haskell (and
thus can teach it but only as they know it), and also the external
history of the mainstream which understands imperativistic thinking but
not functional declarative thinking (and thus we must teach the features
in order to give the understanding necessary for teaching the libraries).


Recently Galois has been focusing on developing the infrastructure
necessary for having easy access to libraries. To this day CPAN is the
reason why Perl is one of the best languages out there. Other languages
have tried emulating that repository, but the only one I've seen that
has the community necessary to make it fly has been Hackage; and the
development of Cabal/Hackage is very recent and still very bleeding edge
(with the scars to prove it). With Galois' support, I think most
Haskellers are aware of Hackage now, however it still hasn't made it
into the tutorials in the same way that CPAN is integral to the teaching
of Perl.

Real World Haskell is another groundbreaking, but recent, development.
It's a great book in itself and groundbreaking for embracing
open-development in the publishing industry, but it's also the first of
this shift from teaching Haskell = Patterns + Recursion + Laziness +
Class to teaching modern Haskell in a more holistic community-oriented
way. It's worth reiterating that RWH was not the cause of the shift in
the community, but is rather a result of the ongoing shift. The
Typeclassopedia is another drop in this river: excellent, recent.

So I agree that most of the tutorials are lagging behind the modern form
of Haskell, but I think this is due in part to a very recent change in
the growth and direction of the community. As always with avoiding
success at all costs, whether we end up the better for it in the end
will depend on holding onto enough newcomers who have only ever known
this modern Haskell, because they are the ones who will have the proper
perspective to write tutorials and teach the language as if it's always
been this way. You must be the change you wish to see in the world.


> Most of the "documentation" is in research papers, and a "normal"
> programmer don't want to read these papers.

Yes, and no. There is quite a bit of documentation in research papers,
and mainstream programmers don't read research. However, this is a big
part of what makes the Haskell community what it is. There are plenty of
non-academics here, but they have the willingness to read these papers
(even if it's out of the ordinary) and the desire to learn radical new
things (because they're out of the ordinary). A good deal of the papers
these days are eminently readable by the laity, moreso than other
research papers in computer science or programming languages IMO.

This is one of the big things that separates Haskell from the
mainstream, but it's not something I see going away any time soon. Given
the recent surge of interest from the mainstream, I think it's finally
time that we take a more proactive approach in trying to teach this
aspect as one of the tenets of our community. Presently there's still a
"take it or leave it" tenor to these discussions, and that needs to be
dispelled before it poisons the relations between the old guard and the
young turks. New tutorials need to find some way of introducing
non-academics to the idea that the academy is not an ivory tower and
that part of what makes Haskell cool is the fact that it takes these
theoretical ideas and applies them to the real world. It's challenging
to fight anti-intellectualism anywhere, but it's become apparent to me
that this is something that we need to develop an organized response to.

--
Live well,
~wren

wren ng thornton

unread,
Mar 25, 2009, 5:57:37 PM3/25/09
to Haskell Cafe
Dan Weston wrote:
> So to be clear with the terminology:
>
> inductive = good consumer?
> coinductive = good producer?
>
> So fusion should be possible (automatically? or do I need a GHC rule?) with
> inductive . coinductive
>
> Or have I bungled it?

Not quite. Induction means starting from base cases and building things
upwards from those. Coinduction is the dual and can be thought of as
starting from the ceiling and building your way downwards (until you hit
the base cases, or possibly forever).

So, if you have potentially infinite data (aka co-data) coming in, then
you need to use coinduction because you may never see the basis but you
want to make progress anyways. In formal terms, coinduction on co-data
gives the same progress guarantees as induction on data, though
termination is no longer a conclusion of progress (since coinduction may
produce an infinite output from infinite input).

Haskell doesn't distinguish data and co-data, but you can imagine data
as if all the data constructors are strict, and co-data as if all the
constructors are lazy. Another way to think of it is that finite lists
(ala OCaml and SML) are data, but streams are co-data.
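That reading can be sketched with strictness annotations (the names here are illustrative, not from the thread):

```haskell
-- "Data": strict fields force the whole spine, so only finite values
-- inhabit the type.
data FinList a = FNil | FCons !a !(FinList a)

fLength :: FinList a -> Int
fLength FNil         = 0
fLength (FCons _ xs) = 1 + fLength xs

-- "Co-data": ordinary lazy constructors admit infinite values, which
-- are consumed piecemeal.
data Stream a = SCons a (Stream a)

sHead :: Stream a -> a
sHead (SCons x _) = x

ones :: Stream Int
ones = SCons 1 ones
```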

For fusion there's the build/fold type and its dual unfold/destroy,
where build/unfold are producers and fold/destroy are consumers. To
understand how fusion works, let's look at the types of build and fold.

GHC.Exts.build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
flip (flip . foldr) :: [a] -> ( (a -> b -> b) -> b -> b )

Together they give an isomorphism between lists as an ADT [a] and as a
catamorphism (forall b. (a -> b -> b) -> b -> b), aka Church encoding.
When we have build followed by foldr, we can remove the intermediate
list and pass the F-algebra down directly:

foldr cons nil (build k) = k cons nil

For unfold/destroy fusion the idea is the same except that we use unfold
(an anamorphism on the greatest fixed point) instead of fold (a
catamorphism on the least fixed point). The two fixed points coincide in
Haskell.
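The build/fold side of the rule can be checked by hand. A sketch: `GHC.Exts.build` is the real combinator, and the producer `k` here is just an example of mine (the list [1,2,3] with its constructors abstracted out):

```haskell
import GHC.Exts (build)

-- A producer in build style.
k :: (Int -> b -> b) -> b -> b
k cons nil = 1 `cons` (2 `cons` (3 `cons` nil))

-- Both sides of the rewrite rule  foldr cons nil (build k) = k cons nil:
lhs, rhs :: Int
lhs = foldr (+) 0 (build k)
rhs = k (+) 0
```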

Since Haskell does build/fold fusion, "good producer" requires that the
function was written using build, and "good consumer" requires it's
written using foldr. Using these functions allows us to apply the rule,
though it's not sufficient for "good fusion". Why the functions have the
particular types they do and why this is safe has to do with induction
and coinduction, but the relationship isn't direct.

The reason a coinductive function is easy to make into a good producer
has to do with that relationship. Take a canonically coinductive
function like

f [] = []
f (x:xs) = x : f xs

Once we've made one step of recursion, we've generated (x:) and then
have a thunk for recursing. Most importantly is that no matter how we
evaluate the rest of the list, the head of the return value is already
known to be (:) thus we can get to WHNF after one step. Whatever
function is consuming this output can then take x and do whatever with
it, and then pull on f xs which then takes a single step and returns
(x':) along with a thunk f xs'. Because all of those (:) are being
produced immediately, it's easy to abstract it out as a functional
argument--- thus we can use build.

Coinduction doesn't need to do 1-to-1 mapping of input to output, there
just needs to be the guarantee that we only need to read a finite amount
of input before producing a non-zero finite amount of output. These
functions are also coinductive:

p [] = []
p [x] = [x]
p (x:y:ys) = y : x : p ys

q [] = []
q [x] = []
q (x:y:ys) = y : q ys

r [] = []
r (x:xs) = x : x : r xs

They can also be written using build, though they're chunkier about
reading input or producing output. These functions are not coinductive
because there's no finite bound on how long it takes to reach WHNF:

bad [] = []
bad (x:xs) = bad xs

reverse [] = []
reverse (x:xs) = reverse xs ++ [x]
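The operational difference is visible with take. A self-contained restatement of two of the functions above (`rev` renames the thread's `reverse` to avoid clashing with the Prelude):

```haskell
-- Coinductive: one constructor of output per bounded amount of input.
r :: [a] -> [a]
r []     = []
r (x:xs) = x : x : r xs

-- Not coinductive: no WHNF until the entire input is consumed.
rev :: [a] -> [a]
rev []     = []
rev (x:xs) = rev xs ++ [x]

-- take 4 (r [1..])   yields [1,1,2,2]
-- take 1 (rev [1..]) never returns
```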

Because build/fold is an isomorphism, we can technically use build for
writing *any* function that produces a list. However, there's more to
fusion than just using the build/fold isomorphism. The big idea behind
it all is that when producers and consumers are in 1-to-1 correlation,
then we can avoid allocating that 1 (the cons cell) and can just pass
the arguments of the constructor directly to the consumer. For example:

let buildF [] = []
buildF (x:xs) = x : buildF xs

consumeF [] = 0
consumeF (x:xs) = 1 + consumeF xs
in
consumeF . buildF
==
let buildF = \xs -> build (f xs)
where
f [] cons nil = nil
f (x:xs) cons nil = x `cons` f xs cons nil

consumeF = foldr consumeCons consumeNil
where
consumeNil = 0
consumeCons x rs = 1 + rs
in
consumeF . buildF
==
let f [] cons nil = nil
f (x:xs) cons nil = x `cons` f xs cons nil

consumeNil = 0
consumeCons x rs = 1 + rs
in
foldr consumeCons consumeNil . \xs -> build (f xs)
==
let... in
\xs -> foldr consumeCons consumeNil (build (f xs))
==
let... in
\xs -> (f xs) consumeCons consumeNil

And now f never allocates any (:) or [], it just calls the two consumers
directly. The first step of choosing to use build and foldr instead of
primitive recursion is what enables the compiler to automatically do all
the other steps.

Leaving it at that is cute since we can avoid allocating the list,
however, due to laziness we may still end up allocating a spine of calls
to consumeCons, which isn't much better than a spine of calls to (:).
This is why "good producers" are ones which are capable of producing a
single cons at a time, they never construct a spine before it is needed
by the consumer. And this is why "good consumers" are ones which are
capable of consuming a single cons at a time, they never force the
production of a spine without immediately consuming it. We can relax
this goodness from 1-to-1 to chunkier things, but that also reduces the
benefits of fusion.

All of this can be generalized to other types besides lists, of course.

--
Live well,
~wren

Dan Piponi

unread,
Mar 25, 2009, 6:12:20 PM3/25/09
to Thomas Hartman, Miguel Mitrofanov, Haskell Cafe mailing list
On Wed, Mar 25, 2009 at 12:44 PM, Thomas Hartman <tphy...@gmail.com> wrote:
>> Are you saying there's a problem with this implementation? It's the
>
> Yes, there is actually a problem with this implementation.

> However, there is something to be said for code that just looks like a


> duck and quacks like a duck. It's less likely to surprise you.

Well the problem here isn't that the code does something surprising.
It's author was making assumptions about the type of input that it's
going to get. I think that's an orthogonal issue.

> So... I insist... Easy for a beginner to read == better!

Not at all. Beginner list processing code can and often does go awry
when presented with infinite lists.

The moral here has nothing to do with readability by beginners. It's:
if the function you're defining could be extended naturally to
infinite lists, and it would be useful to do so, then make it do so.

Ryan Ingram

unread,
Mar 25, 2009, 6:41:38 PM3/25/09
to Jonathan Cast, Simon Marlow, Miguel Mitrofanov, Haskell Cafe mailing list
On Wed, Mar 25, 2009 at 8:25 AM, Jonathan Cast
<jonath...@fastmail.fm> wrote:
> On Wed, 2009-03-25 at 15:09 +0000, Simon Marlow wrote:
>> the ordering that the state monad expects
>> (and I can never remember which way around they are in Control.Monad.State).
>
> Really?  I found it obvious once I figured out it how simple it made
> (>>=).  With the order from Control.Monad.State (with constructors
> ignored):
>
>    a >>= f = \ s -> case s a of
>       (x, s') -> f x s'
>
> Reversing the order of the components of the result gives you
>
>    a >>= f = \ s -> case s a of
>        (s', x) -> f x s'
>
> which just looks weird.

However, if you are used to thinking in terms of type composition, s
-> (s, a) makes more sense, because it is effectively

(s ->) . (s,) . Identity

whose "functor-ness" is automatic via composition of functors:

newtype Identity a = Identity a
inIdentity f (Identity a) = Identity (f a)

instance Functor Identity where
  fmap f = inIdentity f

instance Functor ((,) a) where
  fmap f (a, x) = (a, f x)
instance Functor ((->) a) where
  fmap f k a = f (k a)

newtype O f g x = O (f (g x))
inO f (O a) = O (f a)
instance (Functor f, Functor g) => Functor (O f g) where
  fmap f = inO (fmap (fmap f))
  -- or fmap = inO . fmap . fmap

-- not valid haskell, but if there were sections at the type level it would be.
type State s = (s ->) `O` (s,) `O` Identity
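[Editorial aside: without type-level sections, the same composition can be written today with prefix type constructors. A self-contained sketch; the `runState` and `tick` names are mine, for illustration:]

```haskell
newtype Identity a = Identity a
newtype O f g x = O { unO :: f (g x) }

-- Prefix spelling of  (s ->) `O` (s,) `O` Identity :
type State s = O ((->) s) (O ((,) s) Identity)

-- Note the (s, a) component order falls out of the ((,) s) functor,
-- matching GHC's IO/ST rather than Control.Monad.State.
runState :: State s a -> s -> (s, a)
runState (O f) s = case unO (f s) of (s', Identity a) -> (s', a)

-- A state action built directly in this representation.
tick :: State Int Int
tick = O (\s -> O (s + 1, Identity s))
```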

-- ryan

wren ng thornton

unread,
Mar 25, 2009, 6:44:45 PM3/25/09
to Haskell Cafe
Zachary Turner wrote:
> On Tue, Mar 24, 2009 at 10:32 PM, wren ng thornton <wr...@freegeek.org> wrote:
>
>> Both of these conclusions seem quite natural to me, even from before
>> learning Haskell. It seems, therefore, that "naturality" is not the proper
>> metric to discuss. It's oft overlooked, but the fact is that expressivity
>> comes not from more formal power, but from _less_.
>>
>> * Natural language has a limited range of words and syntactic constructs,
>> but gives the larger-enough building blocks to enable unconstrained
>> communication; whereas a language with a unique word for every utterance
>> (arguably simpler) is impossible to learn.
>
>
> On the other hand, -certain- languages are more expressive than others. As
> an example, I personally find English far more expressive than both
> Vietnamese and Japanese, yet English is far more complicated.

That's funny, I find Japanese to be far more expressive than English.
(The language itself. Due to familiarity, I myself am more expressive in
English.)

Japanese has sophisticated forms of address that indicate the distance,
degree, and style of the speaker's relationship with the listener. In
English we can get the point across but we don't have the formalism and
so it's all a lot more handwaving. Japanese can indicate topic and focus
directly; whereas English must resort to bold/italics or syntactic
contortions to be precise. Japanese has a wide assortment of pronouns
which imply measures of respect, arrogance, disdain, abashment, etc;
whereas English is limited to a small number that are only deictic.
Japanese has many postpositions which capture abstract comparative
relations that are difficult to express concisely in English. Japanese
sentential particles can express a wide range of affect; whereas English
must rely on intonation and context to determine whether something
should be interpreted as compassionate, bonding, insulting, ironic, etc.

Of course it all depends on what exactly you care to express. Japanese
has restricted phonology, as you mentioned, though this is only as
meaningful as the character set used for variable names in a programming
language. Japanese also lacks certain sophisticated distinctions in
English like definite vs indefinite articles, and singular vs plural vs
mass-count nouns.


> Words that are spelled the same in Japanese are
> pronounced the same 100% of the time.

False. There are numerous words which have the same spelling and
different pronunciations. For example 今日 can be either /kyou/ "today"
or /kon'niti/ "every day; daily". This is one reason why learning to
read Japanese is so difficult for westerners. An even bigger reason is
that countless words can be spelled in a number of different ways, with
each spelling having different nuances and implications (sometimes to
the point where the spellings are not interchangeable).


> Anyway the point of all this is that in English you have more freedom and
> more power, and hence you use (abuse?) the syntax of the language to create
> words, sentences, and phrases that express very powerful things.
> Furthermore, they are things that almost all English speakers would be able
> to grasp the full meaning of what you've said.

All natural languages are Thinking-complete. Just like with
Turing-complete programming languages, the only difference is where they
hide the bodies. There are plenty of things that I as a native speaker
of English could say which other natives would grasp but which would
confuse many of my non-native yet fluent coworkers. The Japanese abuse
their syntax just as badly as we abuse English, or far far worse if we
include the slang of youth culture. The Japanese have long thought of
their language as a toy box ripe for experimentation, and you can see
the effects of this everywhere.

--
Live well,
~wren
